Algorithm for Non-proportional Loading in Sequentially Linear Analysis
Yu, C.; Hoogenboom, P.C.J.; Rots, J.G.; Saouma, V.; Bolander, J.; Landis, E.
2016-01-01
Sequentially linear analysis (SLA) is an alternative to the Newton-Raphson method for analyzing the nonlinear behavior of reinforced concrete and masonry structures. In this paper SLA is extended to load cases that are applied one after the other, for example first dead load and then wind load. It
Linearity in Process Languages
Nygaard, Mikkel; Winskel, Glynn
2002-01-01
The meaning and mathematical consequences of linearity (managing without a presumed ability to copy) are studied for a path-based model of processes which is also a model of affine-linear logic. This connection yields an affine-linear language for processes, automatically respecting open-map bisimulation, in which a range of process operations can be expressed. An operational semantics is provided for the tensor fragment of the language. Different ways to make assemblies of processes lead to different choices of exponential, some of which respect bisimulation.
Sequential spatial processes for image analysis
Lieshout, van M.N.M.; Capasso, V.
2009-01-01
We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through a video
Microstructure history effect during sequential thermomechanical processing
Yassar, Reza S.; Murphy, John; Burton, Christina; Horstemeyer, Mark F.; El kadiri, Haitham; Shokuhfar, Tolou
2008-01-01
The key to modeling material processing behavior is linking the microstructure evolution to its processing history. This paper quantifies various microstructural features of an aluminum automotive alloy that undergoes sequential thermomechanical processing comprising hot rolling of a 150-mm billet to a 75-mm billet, rolling to 3 mm, annealing, and then cold rolling to a 0.8-mm-thick sheet. The microstructural content was characterized by means of electron backscatter diffraction, scanning electron microscopy, and transmission electron microscopy. The results clearly demonstrate the evolution of precipitate morphologies, dislocation structures, and grain orientation distributions. These data can be used to improve material models that claim to capture the history effects of materials processing.
Campbell and moment measures for finite sequential spatial processes
M.N.M. van Lieshout (Marie-Colette)
2006-01-01
We define moment and Campbell measures for sequential spatial processes, prove a Campbell-Mecke theorem, and relate the results to their counterparts in the theory of point processes. In particular, we show that any finite sequential spatial process model can be derived as the vector
Reading Remediation Based on Sequential and Simultaneous Processing.
Gunnison, Judy; And Others
1982-01-01
The theory postulating a dichotomy between sequential and simultaneous processing is reviewed and its implications for remediating reading problems are reviewed. Research is cited on sequential-simultaneous processing for early and advanced reading. A list of remedial strategies based on the processing dichotomy addresses decoding and lexical…
Ayalon, Michal; Watson, Anne; Lerman, Steve
2015-01-01
This study investigates students' ways of attending to linear sequential data in two tasks, and conjectures possible relationships between those ways and elements of the task design. Drawing on the substantial literature about such situations, we focus for this paper on linear rate of change, and on covariation and correspondence approaches to…
Linear Algebra and Image Processing
Allali, Mohamed
2010-01-01
We use the computing technology digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach by using technology to link linear algebra to DIP is interesting and unexpected to both students as well as many faculty. (Contains 2 tables and 11 figures.)
Process tomography via sequential measurements on a single quantum system
Bassa, H
2015-09-01
The authors utilize a discrete (sequential) measurement protocol to investigate quantum process tomography of a single two-level quantum system, with an unknown initial state, undergoing Rabi oscillations. The ignorance of the dynamical parameters...
Configural and component processing in simultaneous and sequential lineup procedures.
Flowe, Heather D; Smith, Harriet M J; Karoğlu, Nilda; Onwuegbusi, Tochukwu O; Rai, Lovedeep
2016-01-01
Configural processing supports accurate face recognition, yet it has never been examined within the context of criminal identification lineups. We tested, using the inversion paradigm, the role of configural processing in lineups. Recent research has found that face discrimination accuracy in lineups is better in a simultaneous compared to a sequential lineup procedure. Therefore, we compared configural processing in simultaneous and sequential lineups to examine whether there are differences. We had participants view a crime video, and then they attempted to identify the perpetrator from a simultaneous or sequential lineup. The test faces were presented either upright or inverted, as previous research has shown that inverting test faces disrupts configural processing. The size of the inversion effect for faces was the same across lineup procedures, indicating that configural processing underlies face recognition in both procedures. Discrimination accuracy was comparable across lineup procedures in both the upright and inverted conditions. Theoretical implications of the results are discussed.
Periodic linear differential stochastic processes
Kwakernaak, H.
1975-01-01
Periodic linear differential processes are defined and their properties are analyzed. Equivalent representations are discussed, and the solutions of related optimal estimation problems are given. An extension is presented of Kailath and Geesey’s [1] results concerning the innovations representation
TELEGRAPHS TO INCANDESCENT LAMPS: A SEQUENTIAL PROCESS OF INNOVATION
Laurence J. Malone
2000-01-01
This paper outlines a sequential process of technological innovation in the emergence of the electrical industry in the United States from 1830 to 1880. Successive inventions that realized the commercial possibilities of electricity provided the foundation for an industry in which technical knowledge, invention and diffusion were ultimately consolidated within the managerial structure of new firms. The genesis of the industry is traced, sequentially, through the development of the telegraph, arc light and incandescent lamp. Exploring the origins of the telegraph and incandescent lamp reveals a process in which a series of inventions and firms resulted from successful efforts to use scientific principles to create new commodities and markets.
Linear parallel processing machines I
Von Kunze, M
1984-01-01
As is well known, non-context-free grammars for generating formal languages possess an intrinsic computational power that presents serious difficulties for efficient parsing algorithms as well as for the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for investigating the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB --> A_1 ... A_n b_1 ... b_m. These grammars may be thought of as automata operating by parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automaton and its 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are, in their general form, equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.
On the sequentiality of the multiple Coulomb-excitation process
Dannhaeuser, G.; Boer, J. de
1978-01-01
This paper describes the results of 'computer experiments' illustrating the meaning of a new concept called 'sequentiality'. This concept applies to processes in which the excitation of a given state is mainly accomplished by a large number of steps, and it deals with the question of the extent to which a transition close to the ground state occurs before one between the highest excited states. (orig.)
Spatial Processes in Linear Ordering
von Hecker, Ulrich; Klauer, Karl Christoph; Wolf, Lukas; Fazilat-Pour, Masoud
2016-01-01
Memory performance in linear order reasoning tasks (A > B, B > C, C > D, etc.) shows quicker, and more accurate responses to queries on wider (AD) than narrower (AB) pairs on a hypothetical linear mental model (A -- B -- C -- D). While indicative of an analogue representation, research so far did not provide positive evidence for spatial…
Sequentially solution-processed, nanostructured polymer photovoltaics using selective solvents
Kim, Do Hwan; Mei, Jianguo; Ayzner, Alexander L.; Schmidt, Kristin; Giri, Gaurav; Appleton, Anthony L.; Toney, Michael F.; Bao, Zhenan
2014-01-01
We demonstrate high-performance sequentially solution-processed organic photovoltaics (OPVs) with a power conversion efficiency (PCE) of 5% for blend films using a donor polymer based on the isoindigo-bithiophene repeat unit (PII2T-C10C8) and a fullerene derivative [6,6]-phenyl-C[71]-butyric acid methyl ester (PC71BM). This has been accomplished by systematically controlling the swelling and intermixing processes of the layer with various processing solvents during deposition of the fullerene. We find that among the solvents used for fullerene deposition that primarily swell but do not re-dissolve the polymer underlayer, there were significant microstructural differences between chlorobenzene and o-dichlorobenzene solvents (CB and ODCB, respectively). Specifically, we show that the polymer crystallite orientation distribution in films where ODCB was used to cast the fullerene is broad. This indicates that out-of-plane charge transport through a tortuous transport network is relatively efficient due to a large density of inter-grain connections. In contrast, using CB results in primarily edge-on oriented polymer crystallites, which leads to diminished out-of-plane charge transport. We correlate these microstructural differences with photocurrent measurements, which clearly show that casting the fullerene out of ODCB leads to significantly enhanced power conversion efficiencies. Thus, we believe that tuning the processing solvents used to cast the electron acceptor in sequentially-processed devices is a viable way to controllably tune the blend film microstructure.
Quantum versus classical laws for sequential decay processes
Ghirardi, G.C.; Omero, C.; Weber, T.
1979-05-01
The problem of the deviations of the quantum from the classical laws for the occupation numbers of the various levels in a sequential decay process is discussed in general. A factorization formula is obtained for the matrix elements of the complete Green function entering in the expression of the occupation numbers of the levels. Through this formula and using specific forms of the quantum non-decay probability for the single levels, explicit expressions for the occupation numbers of the levels are obtained and compared with the classical ones. (author)
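As a toy illustration of the classical occupation-number laws against which the quantum results are compared, the Bateman solution for a two-step decay chain can be sketched as follows (a minimal sketch unrelated to the authors' Green-function treatment; the function name and parameter choices are hypothetical):

```python
import math

def bateman_two_step(N0, lam1, lam2, t):
    """Classical occupation numbers for a two-step sequential decay
    chain 1 -> 2 -> 3 (Bateman solution); assumes lam1 != lam2."""
    N1 = N0 * math.exp(-lam1 * t)
    N2 = N0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))
    N3 = N0 - N1 - N2  # conservation fixes the final-state population
    return N1, N2, N3
```

For lam1 = 1 and lam2 = 2, the intermediate-level population N2 peaks at t = ln 2, the point where the classical feed and decay rates balance.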
Sequential Detection of Fission Processes for Harbor Defense
Candy, J V; Walston, S E; Chambers, D H
2015-02-12
With the large increase in terrorist activities throughout the world, the timely and accurate detection of special nuclear material (SNM) has become an extremely high priority for many countries concerned with national security. The detection of radionuclide contraband based on their γ-ray emissions has been attacked vigorously with some interesting and feasible results; however, the fission process of SNM has not received as much attention due to its inherent complexity and required predictive nature. In this paper, on-line, sequential Bayesian detection and estimation (parameter) techniques to rapidly and reliably detect unknown fissioning sources with high statistical confidence are developed.
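The abstract does not spell out the authors' sequential Bayesian scheme; a generic stand-in with the same sequential flavor is Wald's sequential probability ratio test for a Poisson rate shift (an assumption-laden sketch, not the authors' algorithm; the rates and thresholds are illustrative):

```python
import math

def sprt_poisson(counts, b=1.0, s=2.0, alpha=0.01, beta=0.01):
    # Wald's SPRT: background rate b (H0) vs source-present rate b+s (H1),
    # with error targets alpha (false alarm) and beta (missed detection)
    upper = math.log((1 - beta) / alpha)   # declare "source" above this
    lower = math.log(beta / (1 - alpha))   # declare "background" below this
    llr = 0.0
    for n, k in enumerate(counts, 1):
        # log-likelihood ratio increment for one Poisson count k
        llr += k * math.log((b + s) / b) - s
        if llr >= upper:
            return ("source", n)
        if llr <= lower:
            return ("background", n)
    return ("undecided", len(counts))
```

The appeal in this setting is that the test stops as soon as the evidence crosses either threshold, rather than after a fixed counting time.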
Sequential grouping constraints on across-channel auditory processing
Oxenham, Andrew J.; Dau, Torsten
2005-01-01
Søren Buus was one of the pioneers in the study of across-channel auditory processing. His influential 1985 paper showed that introducing slow fluctuations to a low-frequency masker could reduce the detection thresholds of a high-frequency signal by as much as 25 dB [S. Buus, J. Acoust. Soc. Am. 78, 1958–1965 (1985)]. Søren explained this surprising result in terms of the spread of masker excitation and across-channel processing of envelope fluctuations. A later study [S. Buus and C. Pan, J. Acoust. Soc. Am. 96, 1445–1457 (1994)] pioneered the use of the same stimuli in tasks where across-channel processing could either help or hinder performance. In the present set of studies we also use paradigms in which across-channel processing can lead to either improvement or deterioration in performance. We show that sequential grouping constraints can affect both types of paradigm. In particular...
Sequential optimization and reliability assessment method for metal forming processes
Sahai, Atul; Schramm, Uwe; Buranathiti, Thaweepat; Chen Wei; Cao Jian; Xia, Cedric Z.
2004-01-01
Uncertainty is inevitable in any design process. The uncertainty could be due to variations in the geometry of the part, in material properties, or to a lack of knowledge about the phenomena being modeled. Deterministic design optimization does not take uncertainty into account, and worst-case assumptions lead to vastly over-conservative designs. Probabilistic design, such as reliability-based design and robust design, offers tools for making robust and reliable decisions in the presence of uncertainty in the design process. Probabilistic design optimization often involves a double-loop procedure of optimization and iterative probabilistic assessment, which results in high computational demand. This demand can be reduced by replacing computationally intensive simulation models with less costly surrogate models and by employing the Sequential Optimization and Reliability Assessment (SORA) method. The SORA method uses a single-loop strategy with a series of cycles of deterministic optimization and reliability assessment; the two are decoupled within each cycle. This leads to quick improvement of the design from one cycle to the next and increased computational efficiency. This paper demonstrates the effectiveness of the SORA method when applied to designing a sheet metal flanging process. Surrogate models are used as less costly approximations to the computationally expensive finite element simulations.
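The single-loop SORA cycle, deterministic optimization followed by a decoupled reliability assessment and a constraint shift, can be caricatured on a one-variable sizing problem (a minimal sketch under toy assumptions; the Monte Carlo reliability check and the fixed shift increment stand in for the MPP-based update of the real method):

```python
import random

def reliability(d, n=20000, rng=None):
    # Monte Carlo estimate of P(d > load), with load ~ N(1.0, 0.1)
    rng = rng or random.Random(0)
    return sum(d > rng.gauss(1.0, 0.1) for _ in range(n)) / n

def sora(target=0.99, max_cycles=20):
    shift = 0.0
    for _ in range(max_cycles):
        # deterministic optimization: cheapest design meeting the shifted constraint
        d = 1.0 + shift
        # decoupled reliability assessment (fixed-seed Monte Carlo)
        r = reliability(d, rng=random.Random(42))
        if r >= target:
            return d, r
        # enlarge the safety shift (crude stand-in for the MPP update)
        shift += 0.05
    return d, r
```

Each cycle solves a cheap deterministic problem and only then checks reliability, which is the decoupling the abstract describes.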
Sequential processing deficits in schizophrenia: relationship to neuropsychology and genetics.
Hill, S Kristian; Bjorkquist, Olivia; Carrathers, Tarra; Roseberry, Jarett E; Hochberger, William C; Bishop, Jeffrey R
2013-12-01
Utilizing a combination of neuropsychological and cognitive neuroscience approaches may be essential for characterizing cognitive deficits in schizophrenia and eventually assessing cognitive outcomes. This study was designed to compare the stability of select exemplars for these approaches and their correlations in schizophrenia patients with stable treatment and clinical profiles. Reliability estimates for serial order processing were comparable to neuropsychological measures and indicate that experimental serial order processing measures may be less susceptible to practice effects than traditional neuropsychological measures. Correlations were moderate and consistent with a global cognitive factor. Exploratory analyses indicated a potentially critical role of the Met allele of the catechol-O-methyltransferase (COMT) Val158Met polymorphism in externally paced sequential recall. Experimental measures of serial order processing may reflect frontostriatal dysfunction and be a useful supplement to large neuropsychological batteries.
van Staden, J F; Mashamba, Mulalo G; Stefan, Raluca I
2002-09-01
An on-line potentiometric sequential injection titration process analyser for the determination of acetic acid is proposed. A solution of 0.1 mol L(-1) sodium chloride is used as carrier. Titration is achieved by aspirating acetic acid samples between two strong base-zone volumes into a holding coil and by channelling the stack of well-defined zones with flow reversal through a reaction coil to a potentiometric sensor where the peak widths were measured. A linear relationship between peak width and logarithm of the acid concentration was obtained in the range 1-9 g/100 mL. Vinegar samples were analysed without any sample pre-treatment. The method has a relative standard deviation of 0.4% with a sample frequency of 28 samples per hour. The results revealed good agreement between the proposed sequential injection and an automated batch titration method.
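The linear peak-width/log-concentration relationship reported above suggests a simple calibrate-then-invert step (a hypothetical sketch, not the authors' procedure; the slope, intercept, and standards are invented):

```python
import math

def calibrate(widths, concentrations):
    # fit width = m * log10(C) + c on standards by least squares,
    # then return the inverse map width -> concentration
    xs = [math.log10(c) for c in concentrations]
    n = len(xs)
    mx, mw = sum(xs) / n, sum(widths) / n
    m = sum((x - mx) * (w - mw) for x, w in zip(xs, widths)) \
        / sum((x - mx) ** 2 for x in xs)
    c = mw - m * mx
    return lambda width: 10 ** ((width - c) / m)
```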
Nguyen, Ngoc Minh; Corff, Sylvain Le; Moulines, Éric
2017-12-01
This paper focuses on sequential Monte Carlo approximations of smoothing distributions in conditionally linear and Gaussian state spaces. To reduce Monte Carlo variance of smoothers, it is typical in these models to use Rao-Blackwellization: particle approximation is used to sample sequences of hidden regimes while the Gaussian states are explicitly integrated conditional on the sequence of regimes and observations, using variants of the Kalman filter/smoother. The first successful attempt to use Rao-Blackwellization for smoothing extends the Bryson-Frazier smoother for Gaussian linear state space models using the generalized two-filter formula together with Kalman filters/smoothers. More recently, a forward-backward decomposition of smoothing distributions mimicking the Rauch-Tung-Striebel smoother for the regimes combined with backward Kalman updates has been introduced. This paper investigates the benefit of introducing additional rejuvenation steps in all these algorithms to sample at each time instant new regimes conditional on the forward and backward particles. This defines particle-based approximations of the smoothing distributions whose support is not restricted to the set of particles sampled in the forward or backward filter. These procedures are applied to commodity markets which are described using a two-factor model based on the spot price and a convenience yield for crude oil data.
Food processing with linear accelerators
Wilmer, M.E.
1987-01-01
The application of irradiation techniques to the preservation of foods is reviewed. The utility of the process for several important food groups is discussed in the light of work being done in a number of institutions. Recent findings in food chemistry are used to illustrate some of the potential advantages in using high power accelerators in food processing. Energy and dosage estimates are presented for several cases to illustrate the accelerator requirements and to shed light on the economics of the process
Systolic array processing of the sequential decoding algorithm
Chang, C. Y.; Yao, K.
1989-01-01
A systolic array processing technique is applied to implementing the stack algorithm form of the sequential decoding algorithm. It is shown that sorting, a key function in the stack algorithm, can be efficiently realized by a special type of systolic arrays known as systolic priority queues. Compared to the stack-bucket algorithm, this approach is shown to have the advantages that the decoding always moves along the optimal path, that it has a fast and constant decoding speed and that its simple and regular hardware architecture is suitable for VLSI implementation. Three types of systolic priority queues are discussed: random access scheme, shift register scheme and ripple register scheme. The property of the entries stored in the systolic priority queue is also investigated. The results are applicable to many other basic sorting type problems.
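The core of the stack algorithm, always extending the best path held in a priority queue, can be sketched with a conventional binary heap standing in for the systolic priority queue (an illustrative sketch; the toy Fano-like branch metric is an assumption):

```python
import heapq

def stack_decode(received, depth):
    # best-first ("stack") sequential decoding over a binary code tree;
    # toy branch metric: +1 on agreement with the received bit,
    # -2 on disagreement (penalizing wrong branches)
    heap = [(0, ())]  # entries are (negated path metric, path)
    while heap:
        neg_metric, path = heapq.heappop(heap)  # always extend the best path
        if len(path) == depth:
            return list(path)
        for bit in (0, 1):
            step = 1 if bit == received[len(path)] else -2
            heapq.heappush(heap, (neg_metric - step, path + (bit,)))
```

The sorting work hidden in `heappush`/`heappop` is exactly the function the paper maps onto systolic priority queues.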
Snyder, Dalton T; Szalwinski, Lucas J; Cooks, R Graham
2017-10-17
Methods of performing precursor ion scans as well as neutral loss scans in a single linear quadrupole ion trap have recently been described. In this paper we report methodology for performing permutations of MS/MS scan modes, that is, ordered combinations of precursor, product, and neutral loss scans following a single ion injection event. Only particular permutations are allowed; the sequences demonstrated here are (1) multiple precursor ion scans, (2) precursor ion scans followed by a single neutral loss scan, (3) precursor ion scans followed by product ion scans, and (4) segmented neutral loss scans; (5) the common product ion scan can also be performed earlier in these sequences, under certain conditions. Simultaneous scans can also be performed. These include multiple precursor ion scans, precursor ion scans with an accompanying neutral loss scan, and multiple neutral loss scans. We argue that the new capability to perform complex simultaneous and sequential MS^n operations on single ion populations represents a significant step in increasing the selectivity of mass spectrometry.
Linearizing control of continuous anaerobic fermentation processes
Babary, J.P. [Centre National d'Etudes Spatiales (CNES), 31 - Toulouse (France). Laboratoire d'Analyse et d'Architecture des Systemes; Simeonov, I. [Institute of Microbiology, Bulgarian Academy of Sciences (Bulgaria); Ljubenova, V. [Institute of Control and System Research, BAS (Bulgaria); Dochain, D. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium)
1997-09-01
Biotechnological processes (BTP) involve living organisms. In anaerobic fermentation (the biogas production process) organic matter is mineralized by microorganisms into biogas (methane and carbon dioxide) in the absence of oxygen. The biogas is an additional energy source. Generally this process is carried out as a continuous BTP. It has been widely used in practice and has been confirmed as a promising method of solving some energy and ecological problems in agriculture and industry. Because of the very restrictive on-line information, the control of this process in continuous mode is often reduced to control of the biogas production rate or of the concentration of the polluting organic matter (de-pollution control) at a desired value in the presence of some perturbations. Investigations show that classical linear controllers perform well only in the linear zone of the strongly non-linear input-output characteristics. More sophisticated robust and variable-structure (VSC) controllers have been studied, but due to the strongly non-linear dynamics of the process the performance of the closed-loop system may still degrade. The aim of this paper is to investigate different linearizing algorithms for control of a continuous non-linear methane fermentation process using the dilution rate as a control action and taking into account some practical implementation aspects. (authors)
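The idea behind linearizing control, choosing the input so that the nonlinear terms cancel and the closed loop becomes linear, can be shown on a one-state toy plant (a hypothetical sketch, not the fermentation model of the paper; the plant, gain, and setpoint are invented):

```python
def simulate(x0, x_ref, k=2.0, dt=0.01, steps=1000):
    # toy plant dx/dt = -x**2 + u; the linearizing feedback
    # u = x**2 - k*(x - x_ref) cancels the nonlinearity, leaving the
    # linear closed loop dx/dt = -k*(x - x_ref)
    x = x0
    for _ in range(steps):
        u = x * x - k * (x - x_ref)
        x += dt * (-x * x + u)  # forward-Euler integration step
    return x
```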
Allen, Mark H.; And Others
1991-01-01
This study found that a group of 20 children (ages 6-12) with autism and a group of 20 children with developmental receptive language disorder both manifested a relative sequential processing deficit. The groups did not differ significantly on overall sequential and simultaneous processing capabilities relative to their degree of language…
Lee, Seong-Soo
1982-01-01
Tenth-grade students (n=144) received training on one of three processing methods: coding-mapping (simultaneous), coding only, or decision tree (sequential). The induced simultaneous processing strategy worked optimally under rule learning, while the sequential strategy was difficult to induce and/or not optimal for rule-learning operations.…
Precomputing Process Noise Covariance for Onboard Sequential Filters
Olson, Corwin G.; Russell, Ryan P.; Carpenter, J. Russell
2017-01-01
Process noise is often used in estimation filters to account for unmodeled and mismodeled accelerations in the dynamics. The process noise covariance acts to inflate the state covariance over propagation intervals, increasing the uncertainty in the state. In scenarios where the acceleration errors change significantly over time, the standard process noise covariance approach can fail to provide effective representation of the state and its uncertainty. Consider covariance analysis techniques provide a method to precompute a process noise covariance profile along a reference trajectory using known model parameter uncertainties. The process noise covariance profile allows significantly improved state estimation and uncertainty representation over the traditional formulation. As a result, estimation performance on par with the consider filter is achieved for trajectories near the reference trajectory without the additional computational cost of the consider filter. The new formulation also has the potential to significantly reduce the trial-and-error tuning currently required of navigation analysts. A linear estimation problem as described in several previous consider covariance analysis studies is used to demonstrate the effectiveness of the precomputed process noise covariance, as well as a nonlinear descent scenario at the asteroid Bennu with optical navigation.
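The basic mechanism, process noise covariance inflating the state covariance over each propagation interval, can be illustrated with a scalar analogue of P <- F P F^T + Q (a minimal sketch; the numbers are arbitrary and unrelated to the Bennu scenario):

```python
def propagate_covariance(P0, f, Q, steps):
    # scalar analogue of the covariance propagation P <- F P F^T + Q;
    # Q inflates the state uncertainty over each interval
    P = P0
    out = [P]
    for _ in range(steps):
        P = f * P * f + Q
        out.append(P)
    return out
```

With |f| < 1 and Q = 0 the covariance collapses toward zero (overconfidence); a nonzero Q instead drives it to the steady state Q / (1 - f**2).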
Botet, R.
1996-01-01
A novel scaling of the multiplicity distributions is found in the shattering phase of the sequential fragmentation process with inhibition. The same scaling law is shown to hold in the percolation process. (author)
Botet, R.
1996-01-01
A new kinetic fragmentation model, the Fragmentation-Inactivation-Binary (FIB) model, is described, in which a dissipative process randomly stops the sequential, conservative and off-equilibrium fragmentation process. (K.A.)
Sequential method for the assessment of innovations in computer assisted industrial processes
Suarez Antola R.
1995-01-01
A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed.
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions whose deletion disables all previously identified EMs. Thus, a subsequent LP solution subject to this reaction-deletion constraint becomes a distinct EM. In cases where no feasible LP solution is available, IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and provided as supplementary information.
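The IP/LP alternation can be mimicked on a toy network in which the ground-truth EM set is given, with a brute-force minimal hitting-set search standing in for the IP (a conceptual sketch only; real AILP solves integer and linear programs over the stoichiometry, and this toy stops after the first MCS):

```python
from itertools import combinations

def ailp_sketch(reactions, ems):
    """Alternate a hitting-set step ('IP') with a feasibility step ('LP')
    to enumerate EMs; ems is the toy ground-truth set of elementary modes."""
    found, cuts = [], []
    while True:
        # 'IP': smallest reaction-deletion set disabling every EM found so far
        deletion = next(
            set(c)
            for k in range(len(reactions) + 1)
            for c in combinations(reactions, k)
            if all(set(c) & em for em in found)
        )
        # 'LP': any mode avoiding the deleted reactions is a new, distinct EM
        new = next((em for em in ems if not (em & deletion)), None)
        if new is None:
            cuts.append(deletion)  # infeasible: the deletion set is an MCS
            return found, cuts
        found.append(new)
```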
Bain, Sherry K.
1993-01-01
Analysis of Kaufman Assessment Battery for Children (K-ABC) Sequential and Simultaneous Processing scores of 94 children (ages 6-12) with learning disabilities produced factor patterns generally supportive of the traditional K-ABC Mental Processing structure with the exception of Spatial Memory. The sample exhibited relative processing strengths…
Chaaba, Ali; Aboussaleh, Mohamed; Bousshine, Lahbib; Boudaia, El Hassan
2011-01-01
Limit analysis approaches are widely used in the analysis of metalworking processes; however, they apply only to perfectly plastic materials and, more recently, to isotropic hardening materials, excluding any kind of kinematic hardening. In the present work, using the Implicit Standard Materials concept, the sequential limit analysis approach, and the finite element method, our objective is to extend the application of limit analysis to include linear and nonlinear kinematic strain hardening. Because this plastic flow rule is non-associative, the Implicit Standard Materials concept is adopted as a framework for non-standard plasticity modeling. The sequential limit analysis procedure, which treats plastic behavior with nonlinear kinematic strain hardening as a succession of perfectly plastic behaviors, with yield surfaces and geometry updated after each limit analysis sequence, is applied. A standard kinematic finite element method together with a regularization approach is used to perform two large-compression (cold forging) cases under plane-strain and axisymmetric conditions
Post-processing through linear regression
van Schaeybroeck, B.; Vannitsem, S.
2011-03-01
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast, and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Unlike the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best member OLS with noise). At long lead times, the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
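The OLS correction described above can be sketched in a few lines: fit a linear map from raw forecasts to observations, then apply it to correct new forecasts. The toy single-predictor data below are illustrative, not from the paper.

```python
# Minimal sketch of ordinary least-squares (OLS) post-processing:
# fit intercept/slope from past (forecast, observation) pairs,
# then correct a raw forecast with the fitted linear map.

def ols_fit(forecasts, observations):
    """Return (intercept, slope) minimizing squared error."""
    n = len(forecasts)
    mx = sum(forecasts) / n
    my = sum(observations) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(forecasts, observations))
    sxx = sum((x - mx) ** 2 for x in forecasts)
    slope = sxy / sxx
    return my - slope * mx, slope

def ols_correct(forecast, intercept, slope):
    return intercept + slope * forecast

# Toy data: the model systematically over-forecasts by roughly a factor 2.
raw = [2.0, 4.0, 6.0, 8.0]
obs = [1.1, 2.0, 3.1, 4.0]
a, b = ols_fit(raw, obs)
corrected = [ols_correct(x, a, b) for x in raw]
```

The same pattern extends to multiple predictors, where the paper's multicollinearity criterion becomes relevant.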
Sequential double excitations from linear-response time-dependent density functional theory
Mosquera, Martín A.; Ratner, Mark A.; Schatz, George C., E-mail: g-schatz@northwestern.edu [Department of Chemistry, Northwestern University, 2145 Sheridan Rd., Evanston, Illinois 60208 (United States); Chen, Lin X. [Department of Chemistry, Northwestern University, 2145 Sheridan Rd., Evanston, Illinois 60208 (United States); Chemical Sciences and Engineering Division, Argonne National Laboratory, 9700 South Cass Ave., Lemont, Illinois 60439 (United States)
2016-05-28
Traditional UV/vis and X-ray spectroscopies focus mainly on the study of excitations starting exclusively from electronic ground states. However, there are many experiments where transitions from excited states, both absorption and emission, are probed. In this work we develop a formalism based on linear-response time-dependent density functional theory to investigate spectroscopic properties of excited states. We apply our model to study the excited-state absorption of a diplatinum(II) complex under X-rays, and the transient vis/UV absorption of pyrene and azobenzene.
Sequential-Simultaneous Processing and Reading Skills in Primary Grade Children.
McRae, Sandra G.
1986-01-01
The study examined relationships between two modes of information processing, simultaneous and sequential, and two sets of reading skills, word recognition and comprehension, among 40 second and third grade students. Results indicated there is a relationship between simultaneous processing and reading comprehension. (Author)
Process Creation and Full Sequential Composition in a Name-Passing Calculus
Gehrke, Thomas; Rensink, Arend
This paper presents a first attempt to formulate a process calculus featuring process creation and sequential composition, instead of the more usual parallel composition and action prefixing, in a setting where mobility is achieved by communicating channel names. We discuss the questions of scope
Chondrogianni, Vicky; Marinis, Theodoros
2011-01-01
This study investigates the production and online processing of English tense morphemes by sequential bilingual (L2) Turkish-speaking children with more than three years of exposure to English. Thirty-nine six- to nine-year-old L2 children and twenty-eight typically developing age-matched monolingual children…
Sequential specification of time-aware stream processing applications
Geuns, S.J.; Hausmans, J.P.H.M.; Bekooij, Marco Jan Gerrit
Automatic parallelization of Nested Loop Programs (NLPs) is an attractive method to create embedded real-time stream processing applications for multi-core systems. However, the description and parallelization of applications with a time dependent functional behavior has not been considered in NLPs.
Zhang, Zhenjiu; Hu, Hong
2013-01-01
The linear and rotary axes are fundamental parts of multi-axis machine tools. The geometric error components of the axes must be measured for motion error compensation to improve the accuracy of the machine tools. In this paper, a simple method named the three point method is proposed to measure the geometric error of the linear and rotary axes of the machine tools using a laser tracker. A sequential multilateration method, where uncertainty is verified through simulation, is applied to measure the 3D coordinates. Three noncollinear points fixed on the stage of each axis are selected. The coordinates of these points are simultaneously measured using a laser tracker to obtain their volumetric errors by comparing these coordinates with ideal values. Numerous equations can be established using the geometric error models of each axis. The geometric error components can be obtained by solving these equations. The validity of the proposed method is verified through a series of experiments. The results indicate that the proposed method can measure the geometric error of the axes to compensate for the errors in multi-axis machine tools.
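The volumetric error referred to above is, in essence, the deviation of a measured 3D point from its ideal coordinate. A minimal sketch, with made-up coordinates rather than the paper's measurement data:

```python
# Hedged sketch: volumetric error of a stage point as the Euclidean
# deviation of its laser-tracker-measured 3D coordinate from the
# ideal coordinate (toy numbers, not the paper's measurement model).
import math

def volumetric_error(measured, ideal):
    return math.dist(measured, ideal)

ideal = (100.0, 0.0, 0.0)
measured = (100.003, -0.002, 0.001)
err = volumetric_error(measured, ideal)   # same units as input, e.g. mm
```

In the paper's method, such errors for three noncollinear points per axis feed the geometric error model equations.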
Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H
2005-07-01
Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
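The signal detection theory measures discussed above can be computed from hit and false-alarm rates. A minimal sketch with illustrative rates, not the experiments' data:

```python
# d' (discrimination accuracy) and c (response criterion) from hit
# and false-alarm rates, via the inverse normal CDF.
from statistics import NormalDist

def sdt_measures(hit_rate, false_alarm_rate):
    """Return (d', c) for the given rates."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(false_alarm_rate)
    criterion = -0.5 * (z(hit_rate) + z(false_alarm_rate))
    return d_prime, criterion

# Illustrative pattern of a conservative criterion shift under
# sequential presentation: fewer identifications of any kind,
# similar discrimination.
d_sim, c_sim = sdt_measures(0.80, 0.30)   # simultaneous lineup
d_seq, c_seq = sdt_measures(0.65, 0.15)   # sequential lineup
```

Here a larger c for the sequential condition reflects the conservative shift, while comparable d' values indicate unchanged discrimination accuracy.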
Kuchinke, Lars; van der Meer, Elke; Krueger, Frank
2009-01-01
Conceptual knowledge of our world is represented in semantic memory in terms of concepts and semantic relations between concepts. We used functional magnetic resonance imaging (fMRI) to examine the cortical regions underlying the processing of sequential and taxonomic relations. Participants were presented verbal cues and performed three tasks:…
77 FR 43492 - Expedited Vocational Assessment Under the Sequential Evaluation Process
2012-07-25
SOCIAL SECURITY ADMINISTRATION. 20 CFR Parts 404 and 416 [Docket No. SSA-2010-0060] RIN 0960-AH26. Expedited Vocational Assessment Under the Sequential Evaluation Process. AGENCY: Social Security Administration. For further information, visit Social Security Online at http://www.socialsecurity.gov .
Beachler, Jason A; Krueger, Chad A; Johnson, Anthony E
This process improvement study sought to evaluate the compliance in orthopaedic patients with sequential compression devices and to monitor any improvement in compliance following an educational intervention. All non-intensive care unit orthopaedic primary patients were evaluated at random times and their compliance with sequential compression devices was monitored and recorded. Following a 2-week period of data collection, an educational flyer was displayed in every patient's room and nursing staff held an in-service training event focusing on the importance of sequential compression device use in the surgical patient. Patients were then monitored, again at random, and compliance was recorded. With the addition of a simple flyer and a single in-service on the importance of mechanical compression in the surgical patient, a significant improvement in compliance was documented at the authors' institution from 28% to 59% (p < .0001).
Masuda, Hiroshi; Kanda, Yutaro; Okamoto, Yoshifumi; Hirono, Kazuki; Hoshino, Reona; Wakao, Shinji; Tsuburaya, Tomonori
2017-12-01
It is very important, from the point of view of saving energy, to design electrical machinery with high efficiency. Topology optimization (TO) is therefore occasionally used as a design method for improving the performance of electrical machinery under reasonable constraints. Because TO allows a design with a much higher degree of structural freedom, it has the potential to derive novel structures quite different from conventional ones. In this paper, topology optimization using sequential linear programming with a move limit based on adaptive relaxation is applied to two models. The magnetic shielding model, in which there are many local minima, is first employed as a benchmark for evaluating the performance of several mathematical programming methods. Second, an induction heating model is defined in a 2-D axisymmetric field. In this model, the magnetic energy stored in the magnetic body is maximized under a constraint on the volume of the magnetic body. Furthermore, the influence of the location of the design domain on the solutions is investigated.
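A sequential linear programming step with a move limit reduces, in one dimension, to minimizing the linearized objective on a box and shrinking the box when a step fails to improve the objective. The following toy sketch illustrates the idea only; it is not the paper's adaptive-relaxation scheme.

```python
# Hedged sketch of sequential linear programming (SLP) with a move
# limit on a 1-D toy objective: the LP subproblem on a box is solved
# by stepping against the gradient sign; the move limit shrinks when
# the step overshoots.
def slp_minimize(f, grad, x0, limit=1.0, shrink=0.5, iters=60):
    x, fx = x0, f(x0)
    for _ in range(iters):
        step = -limit if grad(x) > 0 else limit   # LP solution on the box
        x_new = x + step
        if f(x_new) < fx:
            x, fx = x_new, f(x_new)               # accept the step
        else:
            limit *= shrink                       # tighten the move limit
    return x

x_opt = slp_minimize(lambda x: (x - 2.0) ** 2,
                     lambda x: 2 * (x - 2.0), x0=5.0)
```

Real topology optimization replaces the scalar with a density field and the box with per-variable move limits, but the accept/shrink logic is the same in spirit.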
Simulation based sequential Monte Carlo methods for discretely observed Markov processes
Neal, Peter
2014-01-01
Parameter estimation for discretely observed Markov processes is a challenging problem. However, simulation of Markov processes is straightforward using the Gillespie algorithm. We exploit this ease of simulation to develop an effective sequential Monte Carlo (SMC) algorithm for obtaining samples from the posterior distribution of the parameters. In particular, we introduce two key innovations, coupled simulations, which allow us to study multiple parameter values on the basis of a single simulation…
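Simulation of a Markov jump process with the Gillespie algorithm, on which the SMC scheme relies, can be sketched for a toy pure-death process. The model and parameters below are illustrative, not the paper's.

```python
# Hedged sketch of the Gillespie algorithm for a pure-death process
# X -> X-1 at rate mu*X: draw exponential waiting times and apply
# one event at a time until the horizon t_end is reached.
import random

def gillespie_death(x0, mu, t_end, rng):
    """Simulate one trajectory; return the state at t_end."""
    t, x = 0.0, x0
    while x > 0:
        rate = mu * x
        t += rng.expovariate(rate)   # waiting time to the next event
        if t > t_end:
            break
        x -= 1                       # one death event
    return x

rng = random.Random(0)
final_states = [gillespie_death(50, 0.5, 1.0, rng) for _ in range(200)]
mean_final = sum(final_states) / len(final_states)
# Analytically, E[X(t)] = x0 * exp(-mu * t), about 30.3 here.
```

An SMC scheme would embed many such simulations as particles and weight them against the discrete observations.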
Fishman, M. M.
1985-01-01
The problem of multialternative sequential discernment of processes is formulated in terms of conditionally optimum procedures minimizing the average length of observations, without any probabilistic assumptions about any one occurring process, rather than in terms of Bayes procedures minimizing the average risk. The problem is to find the procedure that will transform inequalities into equalities. The problem is formulated for various models of signal observation and data processing: (1) discernment of signals from background interference by a multichannel system; (2) discernment of pulse sequences with unknown time delay; (3) discernment of harmonic signals with unknown frequency. An asymptotically optimum sequential procedure is constructed which compares the statistics of the likelihood ratio with the mean-weighted likelihood ratio and estimates the upper bound for conditional average lengths of observations. This procedure is shown to remain valid as the upper bound for the probability of erroneous partial solutions decreases approaching zero and the number of hypotheses increases approaching infinity. It also remains valid under certain special constraints on the probability such as a threshold. A comparison with a fixed-length procedure reveals that this sequential procedure decreases the length of observations to one quarter, on the average, when the probability of erroneous partial solutions is low.
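The simplest relative of such sequential discernment procedures is Wald's two-hypothesis sequential probability ratio test, sketched below for Bernoulli observations. This is an illustration of the general idea, not the multialternative procedure of the paper.

```python
# Hedged sketch of a two-hypothesis sequential likelihood-ratio test
# (Wald's SPRT): observations accumulate until the log-likelihood
# ratio crosses an upper or lower threshold set by the error rates.
import math
import random

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Bernoulli SPRT: return ('H1'|'H0'|'undecided', n_used)."""
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(observations)

rng = random.Random(1)
data = [rng.random() < 0.7 for _ in range(200)]   # true success rate 0.7
decision, n_used = sprt(data, p0=0.3, p1=0.7)
```

As the abstract notes for the multialternative case, the payoff is a much shorter average observation length than a fixed-sample test at comparable error probabilities.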
Oka, Megan; Whiting, Jason
2013-01-01
In Marriage and Family Therapy (MFT), as in many clinical disciplines, concern surfaces about the clinician/researcher gap. This gap includes a lack of accessible, practical research for clinicians. MFT clinical research often borrows from the medical tradition of randomized control trials, which typically use linear methods, or follow procedures distanced from "real-world" therapy. We review traditional research methods and their use in MFT and propose increased use of methods that are more systemic in nature and more applicable to MFTs: process research, dyadic data analysis, and sequential analysis. We will review current research employing these methods, as well as suggestions and directions for further research. © 2013 American Association for Marriage and Family Therapy.
Direct quantum process tomography via measuring sequential weak values of incompatible observables.
Kim, Yosep; Kim, Yong-Su; Lee, Sang-Yun; Han, Sang-Wook; Moon, Sung; Kim, Yoon-Ho; Cho, Young-Wook
2018-01-15
The weak value concept has enabled fundamental studies of quantum measurement and, recently, found potential applications in quantum and classical metrology. However, most weak value experiments reported to date do not require quantum mechanical descriptions, as they only exploit the classical wave nature of the physical systems. In this work, we demonstrate measurement of the sequential weak value of two incompatible observables by making use of two-photon quantum interference so that the results can only be explained quantum physically. We then demonstrate that the sequential weak value measurement can be used to perform direct quantum process tomography of a qubit channel. Our work not only demonstrates the quantum nature of weak values but also presents potential new applications of weak values in analyzing quantum channels and operations.
Design of Linear-Quadratic-Regulator for a CSTR process
Meghna, P. R.; Saranya, V.; Jaganatha Pandian, B.
2017-11-01
This paper aims at creating a Linear Quadratic Regulator (LQR) for a Continuous Stirred Tank Reactor (CSTR). A CSTR is a common process used in chemical industries. It is a highly non-linear system. Therefore, in order to create the gain feedback controller, the model is linearized. The controller is designed for the linearized model and the concentration and volume of the liquid in the reactor are kept at a constant value as required.
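For a scalar linearized plant, the LQR gain can be obtained by iterating the discrete-time Riccati recursion. A minimal sketch with illustrative numbers, not the CSTR model of the paper:

```python
# Hedged sketch of LQR gain computation for a scalar linearized plant
# x[k+1] = a*x[k] + b*u[k] with cost sum(q*x^2 + r*u^2); a, b, q, r
# below are made up for illustration.
def lqr_gain(a, b, q, r, iters=500):
    """Iterate the discrete-time Riccati recursion to convergence."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * a - a * p * b * k   # Riccati update
    return (b * p * a) / (r + b * p * b)

k = lqr_gain(a=1.1, b=0.5, q=1.0, r=0.1)
closed_loop = 1.1 - 0.5 * k   # |a - b*k| < 1 means the loop is stabilized
```

For the multivariable linearized CSTR the same recursion runs with matrices, with the matrix inverse replacing the scalar division.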
Amplitudes for multiphoton quantum processes in linear optics
Urías, Jesús
2011-01-01
The prominent role that linear optical networks have acquired in the engineering of photon states calls for physically intuitive and automatic methods to compute the probability amplitudes for the multiphoton quantum processes occurring in linear optics. A version of Wick's theorem for the expectation value, on any vector state, of products of linear operators, in general, is proved. We use it to extract the combinatorics of any multiphoton quantum processes in linear optics. The result is presented as a concise rule to write down directly explicit formulae for the probability amplitude of any multiphoton process in linear optics. The rule achieves a considerable simplification and provides an intuitive physical insight about quantum multiphoton processes. The methodology is applied to the generation of high-photon-number entangled states by interferometrically mixing coherent light with spontaneously down-converted light.
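One concrete combinatorial object that appears in such amplitude rules is the matrix permanent: single-photon transition amplitudes in a linear network are known to involve permanents of sub-matrices of the network unitary. The sketch below shows only this combinatorial ingredient, with a standard 50/50 beamsplitter example, not the paper's Wick-theorem derivation.

```python
# Brute-force matrix permanent; fine for the small matrices used here.
from itertools import permutations
from math import prod, sqrt

def permanent(m):
    """Sum over permutations of products of entries (no sign factor)."""
    n = len(m)
    return sum(
        prod(m[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

# 50/50 beamsplitter unitary: the coincidence amplitude for one photon
# in each input and one in each output is the permanent, which vanishes
# (Hong-Ou-Mandel suppression).
s = 1 / sqrt(2)
bs = [[s, s], [s, -s]]
hom_amplitude = permanent(bs)
```

The brute-force sum is factorial in the matrix size; Ryser's formula is the usual choice beyond a handful of photons.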
Markov decision processes: a tool for sequential decision making under uncertainty.
Alagoz, Oguzhan; Hsu, Heather; Schaefer, Andrew J; Roberts, Mark S
2010-01-01
We provide a tutorial on the construction and evaluation of Markov decision processes (MDPs), which are powerful analytical tools used for sequential decision making under uncertainty that have been widely used in many industrial and manufacturing applications but are underutilized in medical decision making (MDM). We demonstrate the use of an MDP to solve a sequential clinical treatment problem under uncertainty. Markov decision processes generalize standard Markov models in that a decision process is embedded in the model and multiple decisions are made over time. Furthermore, they have significant advantages over standard decision analysis. We compare MDPs to standard Markov-based simulation models by solving the problem of the optimal timing of living-donor liver transplantation using both methods. Both models result in the same optimal transplantation policy and the same total life expectancies for the same patient and living donor. The computation time for solving the MDP model is significantly smaller than that for solving the Markov model. We briefly describe the growing literature of MDPs applied to medical decisions.
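The MDP machinery described above can be made concrete with value iteration on a toy two-state treatment problem. The states, rewards, and transition probabilities below are invented for illustration; they are not the liver transplantation model of the paper.

```python
# Minimal sketch of value iteration for a finite MDP, the standard
# dynamic-programming solution method for sequential decision making
# under uncertainty.
def value_iteration(states, actions, P, R, gamma=0.9, tol=1e-8):
    """P[s][a] = list of (next_state, prob); R[s][a] = reward."""
    V = {s: 0.0 for s in states}
    while True:
        V_new = {}
        for s in states:
            V_new[s] = max(
                R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions
            )
        if max(abs(V_new[s] - V[s]) for s in states) < tol:
            return V_new
        V = V_new

states, actions = ["healthy", "sick"], ["wait", "treat"]
P = {
    "healthy": {"wait": [("healthy", 0.9), ("sick", 0.1)],
                "treat": [("healthy", 1.0)]},
    "sick":    {"wait": [("sick", 1.0)],
                "treat": [("healthy", 0.6), ("sick", 0.4)]},
}
R = {"healthy": {"wait": 1.0, "treat": 0.7},
     "sick":    {"wait": 0.0, "treat": 0.2}}
V = value_iteration(states, actions, P, R)
```

The optimal policy is then read off as the argmax action in each state, which is the "multiple decisions over time" structure the tutorial contrasts with standard Markov simulation models.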
Brown, Peter; Pullan, Wayne; Yang, Yuedong; Zhou, Yaoqi
2016-02-01
The three-dimensional tertiary structure of a protein at near-atomic resolution provides insight into its function and evolution. As protein structure decides functionality, similarity in structure usually implies similarity in function. As such, structure alignment techniques are often useful in the classification of protein function. Given the rapidly growing rate of new, experimentally determined structures being made available from repositories such as the Protein Data Bank, fast and accurate computational structure comparison tools are required. This paper presents SPalignNS, a non-sequential protein structure alignment tool using a novel asymmetrical greedy search technique. The performance of SPalignNS was evaluated against existing sequential and non-sequential structure alignment methods by performing trials with commonly used datasets. These benchmark datasets used to gauge alignment accuracy include (i) 9538 pairwise alignments implied by the HOMSTRAD database of homologous proteins; (ii) a subset of 64 difficult alignments from set (i) that have low structure similarity; (iii) 199 pairwise alignments of proteins with similar structure but different topology; and (iv) a subset of 20 pairwise alignments from the RIPC set. SPalignNS is shown to achieve greater alignment accuracy (lower or comparable root-mean-squared distance with increased structure overlap coverage) for all datasets, and the highest agreement with reference alignments from the challenging dataset (iv) above, when compared with both sequentially constrained alignments and other non-sequential alignments. SPalignNS is implemented in C++. The source code, binary executable, and a web server version are freely available at http://sparks-lab.org (contact: yaoqi.zhou@griffith.edu.au). © The Author 2015. Published by Oxford University Press. All rights reserved.
Practical Implementations of Advanced Process Control for Linear Systems
Knudsen, Jørgen K . H.; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp
2013-01-01
This paper describes some practical problems encountered when implementing Advanced Process Control (APC) schemes on linear processes. The implemented APC controllers discussed are LQR, Riccati MPC, and Condensed MPC controllers, illustrated by simulation of the Four Tank Process and a linearized model, on pilot plant equipment at the Department of Chemical Engineering, DTU Lyngby.
Optimal linear filtering of Poisson process with dead time
Glukhova, E.V.
1993-01-01
The paper presents a derivation of an integral equation defining the impulse response of the optimum linear filter for estimating the intensity of a fluctuating Poisson process, with allowance for the dead time of transducers
Linear signal processing using silicon micro-ring resonators
Peucheret, Christophe; Ding, Yunhong; Ou, Haiyan
2012-01-01
We review our recent achievements on the use of silicon micro-ring resonators for linear optical signal processing applications, including modulation format conversion, phase-to-intensity modulation conversion, and waveform shaping.
Method for sequentially processing a multi-level interconnect circuit in a vacuum chamber
Routh, D. E.; Sharma, G. C. (Inventor)
1984-01-01
An apparatus is disclosed which includes a vacuum system having a vacuum chamber in which wafers are processed on rotating turntables. The vacuum chamber is provided with an RF sputtering system and a dc magnetron sputtering system. A gas inlet introduces various gases to the vacuum chamber and creates various gas plasmas during the sputtering steps. The rotating turntables ensure that the respective wafers are present under the sputtering guns for an average amount of time, such that consistency in sputtering and deposition is achieved. By continuous and sequential processing of the wafers in a common vacuum chamber without removal, the adverse effects of exposure to atmospheric conditions are eliminated, providing higher-quality circuit contacts and functional devices.
Lohner, Svenja T; Becker, Dirk; Mangold, Klaus-Michael; Tiehm, Andreas
2011-08-01
This article for the first time demonstrates successful application of electrochemical processes to stimulate sequential reductive/oxidative microbial degradation of perchloroethene (PCE) in mineral medium and in contaminated groundwater. In a flow-through column system, hydrogen generation at the cathode supported reductive dechlorination of PCE to cis-dichloroethene (cDCE), vinyl chloride (VC), and ethene (ETH). Electrolytically generated oxygen at the anode allowed subsequent oxidative degradation of the lower chlorinated metabolites. Aerobic cometabolic degradation of cDCE proved to be the bottleneck for complete metabolite elimination. Total removal of chloroethenes was demonstrated for a PCE load of approximately 1.5 μmol/d. In mineral medium, long-term operation with stainless steel electrodes was demonstrated for more than 300 days. In contaminated groundwater, corrosion of the stainless steel anode occurred, whereas DSA (dimensionally stable anodes) proved to be stable. Precipitation of calcareous deposits was observed at the cathode, resulting in a higher voltage demand and reduced dechlorination activity. With DSA and groundwater from a contaminated site, complete degradation of chloroethenes in groundwater was obtained for two months thus demonstrating the feasibility of the sequential bioelectro-approach for field application.
Scriba, Thomas J; Penn-Nicholson, Adam; Shankar, Smitha; Hraha, Tom; Thompson, Ethan G; Sterling, David; Nemes, Elisa; Darboe, Fatoumatta; Suliman, Sara; Amon, Lynn M; Mahomed, Hassan; Erasmus, Mzwandile; Whatney, Wendy; Johnson, John L; Boom, W Henry; Hatherill, Mark; Valvo, Joe; De Groote, Mary Ann; Ochsner, Urs A; Aderem, Alan; Hanekom, Willem A; Zak, Daniel E
2017-11-01
Our understanding of mechanisms underlying progression from Mycobacterium tuberculosis infection to pulmonary tuberculosis disease in humans remains limited. To define such mechanisms, we followed M. tuberculosis-infected adolescents longitudinally. Blood samples from forty-four adolescents who ultimately developed tuberculosis disease (“progressors”) were compared with those from 106 matched controls, who remained healthy during two years of follow-up. We performed longitudinal whole blood transcriptomic analyses by RNA sequencing and plasma proteome analyses using multiplexed slow off-rate modified DNA aptamers. Tuberculosis progression was associated with sequential modulation of immunological processes. Type I/II interferon signalling and complement cascade were elevated 18 months before tuberculosis disease diagnosis, while changes in myeloid inflammation, lymphoid, monocyte and neutrophil gene modules occurred more proximally to tuberculosis disease. Analysis of gene expression in purified T cells also revealed early suppression of Th17 responses in progressors, relative to M. tuberculosis-infected controls. This was confirmed in an independent adult cohort who received BCG re-vaccination; transcript expression of interferon response genes in blood prior to BCG administration was associated with suppression of IL-17 expression by BCG-specific CD4 T cells 3 weeks post-vaccination. Our findings provide a timeline to the different immunological stages of disease progression, which comprise sequential inflammatory dynamics and immune alterations that precede disease manifestations and diagnosis of tuberculosis disease. These findings have important implications for developing diagnostics, vaccination and host-directed therapies for tuberculosis. Clinicaltrials.gov, NCT01119521.
Zhang, Jia-yu; Wang, Zi-jian; Li, Yun; Liu, Ying; Cai, Wei; Li, Chen; Lu, Jian-qiu; Qiao, Yan-jiang
2016-01-15
The analytical methodologies for evaluation of multi-component systems in traditional Chinese medicines (TCMs) have been inadequate or unacceptable. As a result, the unclarity of their multi-component composition hinders full interpretation of their bioactivities. In this paper, an ultra-high-performance liquid chromatography coupled with linear ion trap-Orbitrap (UPLC-LTQ-Orbitrap)-based strategy focused on the comprehensive identification of TCM sequential constituents was developed. The strategy was characterized by molecular design, multiple ion monitoring (MIM), targeted database hits, mass spectral trees similarity filter (MTSF), and isomerism discrimination. It was successfully applied in the HRMS data acquisition and processing of chlorogenic acids (CGAs) in Flos Lonicerae Japonicae (FLJ), and a total of 115 chromatographic peaks attributed to 18 categories were characterized, allowing a comprehensive revelation of CGAs in FLJ for the first time. This demonstrated that MIM based on molecular design could improve the efficiency of triggering MS/MS fragmentation reactions. Targeted database hits and MTSF searching greatly facilitated the processing of extremely large information data. Besides, the introduction of diagnostic product ion (DPI) discrimination, ClogP analysis, and molecular simulation raised the efficiency and accuracy of characterizing sequential constituents, especially positional and geometric isomers. In conclusion, the results expanded our understanding of CGAs in FLJ, and the strategy could be exemplary for future research on the comprehensive identification of sequential constituents in TCMs. Meanwhile, it may propose a novel idea for analyzing sequential constituents, and is promising for quality control and evaluation of TCMs. Copyright © 2015 Elsevier B.V. All rights reserved.
High-Order Sparse Linear Predictors for Audio Processing
Giacobello, Daniele; van Waterschoot, Toon; Christensen, Mads Græsbøll
2010-01-01
Linear prediction has generally failed to make a breakthrough in audio processing, as it has done in speech processing. This is mostly due to its poor modeling performance, since an audio signal is usually an ensemble of different sources. Nevertheless, linear prediction comes with a whole set of interesting features that make the idea of using it in audio processing not far-fetched, e.g., the strong ability to model the spectral peaks that play a dominant role in perception. In this paper, we provide some preliminary conjectures and experiments on the use of high-order sparse linear predictors in audio processing. These predictors, successfully implemented in modeling the short-term and long-term redundancies present in speech signals, will be used to model tonal audio signals, both monophonic and polyphonic. We will show how the sparse predictors are able to model efficiently the different…
State Space Reduction of Linear Processes using Control Flow Reconstruction
van de Pol, Jan Cornelis; Timmer, Mark
2009-01-01
We present a new method for fighting the state space explosion of process algebraic specifications, by performing static analysis on an intermediate format: linear process equations (LPEs). Our method consists of two steps: (1) we reconstruct the LPE's control flow, detecting control flow parameters
Pinto Mariano, Adriano; Bastos Borba Costa, Caliane; de Franceschi de Angelis, Dejanira; Maugeri Filho, Francisco; Pires Atala, Daniel Ibraim; Wolf Maciel, Maria Regina; Maciel Filho, Rubens
2009-11-01
In this work, the mathematical optimization of a continuous flash fermentation process for the production of biobutanol was studied. The process consists of three interconnected units: fermentor, cell-retention system (tangential microfiltration), and vacuum flash vessel (responsible for the continuous recovery of butanol from the broth). The objective of the optimization was to maximize butanol productivity for a desired substrate conversion. Two strategies were compared for the optimization of the process. In one, the process was represented by a deterministic model with kinetic parameters determined experimentally and, in the other, by a statistical model obtained using the factorial design technique combined with simulation. For both strategies, the problem was written as a nonlinear programming problem and was solved with the sequential quadratic programming technique. The results showed that, despite the very similar solutions obtained with both strategies, the problems found with the deterministic model, such as lack of convergence and high computational time, make the optimization strategy with the statistical model, which proved robust and fast, more suitable for the flash fermentation process and recommended for real-time applications coupling optimization and control.
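The constrained optimization described above can be sketched as a small nonlinear program. The following is an illustrative sketch only: the response surface, conversion model, factor ranges, and coefficients below are invented, not the paper's fitted model, and a feasible-grid search stands in for the sequential quadratic programming solver.

```python
# Hedged sketch: maximize productivity subject to a minimum-conversion
# constraint, using a hypothetical quadratic response surface of the kind a
# factorial design would yield. All coefficients are illustrative assumptions.

def productivity(d, r):
    """Toy response surface: productivity (g/L/h) vs dilution rate d (1/h)
    and a hypothetical broth recycle ratio r."""
    return 2.0 + 8.0 * d - 10.0 * d**2 + 1.5 * r - 1.0 * r**2 + 0.5 * d * r

def conversion(d, r):
    """Toy substrate-conversion model; conversion falls as dilution rises."""
    return 0.99 - 0.5 * d + 0.05 * r

def optimize(min_conversion=0.92, steps=200):
    """Grid search over the feasible region (an SQP solver would be used in
    practice); returns (best productivity, d, r)."""
    best = None
    for i in range(steps + 1):
        for j in range(steps + 1):
            d = 0.4 * i / steps          # dilution rate in [0, 0.4] 1/h
            r = 1.0 * j / steps          # recycle ratio in [0, 1]
            if conversion(d, r) >= min_conversion:   # feasibility constraint
                p = productivity(d, r)
                if best is None or p > best[0]:
                    best = (p, d, r)
    return best
```

The grid search merely makes the objective/constraint structure explicit; a production implementation would hand the same nonlinear program to an SQP routine, as the paper does.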
Fiebach, Christian J; Schubotz, Ricarda I
2006-05-01
This paper proposes a domain-general model for the functional contribution of ventral premotor cortex (PMv) and adjacent Broca's area to perceptual, cognitive, and motor processing. We propose to understand this frontal region as a highly flexible sequence processor, with the PMv mapping sequential events onto stored structural templates and Broca's area involved in more complex, hierarchical, or hypersequential processing. This proposal is supported by reference to previous functional neuroimaging studies investigating abstract sequence processing and syntactic processing.
Zhang, Zhichao; Ye, Zhibin
2012-08-18
Upon the addition of an equimolar amount of 2,2'-bipyridine, a cationic Pd-diimine complex capable of facilitating "living" ethylene polymerization is switched to catalyze "living" alternating copolymerization of 4-tert-butylstyrene and CO. This unique chemistry is thus employed to synthesize a range of well-defined treelike (hyperbranched polyethylene)-b-(linear polyketone) block polymers.
Optimal periodic inspection of a deterioration process with sequential condition states
Kallen, M.J.; Noortwijk, J.M. van
2006-01-01
The condition of components subject to visual inspections is often evaluated on a discrete scale. If at each inspection a decision is made to do nothing or to perform preventive or corrective maintenance, the proposed decision model allows us to determine the optimal time between periodic inspections, such that the expected average costs per unit of time are minimized. The model which describes the uncertain condition over time is based on a Markov process with sequential phases. The key quantities involved in the model are the probabilities of having to perform either preventive or corrective maintenance before or after an inspection. The cost functions for two scenarios are presented: a scenario in which failure is immediately detected without the need to perform an inspection and a scenario in which failure is only detected by inspection of the object. Analytical results for a special case and algorithmic results for a broad class of Markov processes are derived. The model is illustrated using an application to the periodic inspection of road bridges.
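The trade-off behind the decision model above can be mimicked with a small Monte Carlo sketch. The phase rates, cost figures, and renewal-cycle bookkeeping below are illustrative assumptions, not the paper's analytical results; the paper derives the corresponding quantities analytically.

```python
import random

# Hedged sketch: expected average cost per unit time for periodic inspection
# of a deterioration process with sequential exponential condition phases.
# Failure occurs at the end of the last phase; an inspection that finds the
# object in its final (warning) phase triggers preventive maintenance.

def simulate_cycle(interval, rates, c_insp=1.0, c_prev=10.0, c_corr=50.0,
                   rng=random):
    """One renewal cycle for phase rates `rates` (at least two phases).
    Returns (cycle cost, cycle length)."""
    t_phase, times = 0.0, []
    for rate in rates:
        t_phase += rng.expovariate(rate)
        times.append(t_phase)
    fail_time = times[-1]
    warn_time = times[-2]          # entry into the final (warning) phase
    k = 1
    while True:
        t_insp = k * interval
        if fail_time <= t_insp:    # failed before this inspection: corrective
            return (k - 1) * c_insp + c_corr, fail_time
        if warn_time <= t_insp:    # warning phase detected: preventive
            return k * c_insp + c_prev, t_insp
        k += 1

def avg_cost_rate(interval, rates, n=2000, seed=1):
    """Monte Carlo estimate of expected cost per unit time (renewal-reward)."""
    rng = random.Random(seed)
    tot_c = tot_t = 0.0
    for _ in range(n):
        c, t = simulate_cycle(interval, rates, rng=rng)
        tot_c += c
        tot_t += t
    return tot_c / tot_t
```

Sweeping `interval` over a grid and picking the minimizer reproduces, in simulation, the optimal-interval question the paper answers analytically.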
Relating Reasoning Methodologies in Linear Logic and Process Algebra
Yuxin Deng
2012-11-01
We show that the proof-theoretic notion of logical preorder coincides with the process-theoretic notion of contextual preorder for a CCS-like calculus obtained from the formula-as-process interpretation of a fragment of linear logic. The argument makes use of other standard notions in process algebra, namely a labeled transition system and a coinductively defined simulation relation. This result establishes a connection between an approach to reason about process specifications and a method to reason about logic specifications.
Short-memory linear processes and econometric applications
Mynbaev, Kairat T
2011-01-01
This book serves as a comprehensive source of asymptotic results for econometric models with deterministic exogenous regressors. Such regressors include linear (more generally, piece-wise polynomial) trends, seasonally oscillating functions, and slowly varying functions including logarithmic trends, as well as some specifications of spatial matrices in the theory of spatial models. The book begins with central limit theorems (CLTs) for weighted sums of short memory linear processes. This part contains the analysis of certain operators in Lp spaces and their employment in the derivation of CLTs
Randhawa, Bikkar S.; And Others
1982-01-01
Replications of two basic experiments in support of the dual-coding processing model with grade 10 and college subjects used pictures, concrete words, and abstract words as stimuli presented at fast and slow rates for immediate and sequential recall. Results seem to be consistent with predictions of simultaneous-successive cognitive theory.
Peter, Beate; Button, Le; Stoel-Gammon, Carol; Chapman, Kathy; Raskind, Wendy H.
2013-01-01
The purpose of this study was to evaluate a global deficit in sequential processing as a candidate endophenotype in a family with familial childhood apraxia of speech (CAS). Of 10 adults and 13 children in a three-generational family with speech sound disorder (SSD) consistent with CAS, 3 adults and 6 children had past or present SSD diagnoses. Two…
Sequential neural processes in abacus mental addition: an EEG and FMRI case study.
Ku, Yixuan; Hong, Bo; Zhou, Wenjing; Bodner, Mark; Zhou, Yong-Di
2012-01-01
Abacus experts are able to mentally calculate multi-digit numbers rapidly. Some behavioral and neuroimaging studies have suggested a visuospatial and visuomotor strategy during abacus mental calculation. However, no study up to now has attempted to dissociate temporally the visuospatial neural process from the visuomotor neural process during abacus mental calculation. In the present study, an abacus expert performed mental addition tasks (8-digit and 4-digit addends presented in visual or auditory modes) swiftly and accurately. The expert's correct rates (100%) were significantly higher than those of ordinary subjects performing 1-digit and 2-digit addition tasks. ERPs, EEG source localizations, and fMRI results taken together suggested that visuospatial and visuomotor processes were sequentially arranged during abacus mental addition with visual addends and could be dissociated from each other temporally. The visuospatial transformation of the numbers, in which the superior parietal lobule was most likely involved, might occur first (around 380 ms) after the onset of the stimuli. The visuomotor processing, in which the superior/middle frontal gyri were most likely involved, might occur later (around 440 ms). Meanwhile, fMRI results suggested that the neural networks involved in abacus mental addition with auditory stimuli were similar to those in visual abacus mental addition. The most prominently activated brain areas in both conditions included the bilateral superior parietal lobules (BA 7) and bilateral middle frontal gyri (BA 6). These results suggest a supra-modal brain network in abacus mental addition, which may develop from normal mental calculation networks.
A non-linear model of economic production processes
Ponzi, A.; Yasutomi, A.; Kaneko, K.
2003-06-01
We present a new two-phase model of economic production processes, a non-linear dynamical version of von Neumann's neoclassical model of production that includes a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into the observed stability of competitive capitalist economies in comparison to monopolistic economies. Capitalist economies are also shown to have low unemployment.
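The min-of-inputs production rule described above can be sketched for a toy set of processes. The goods, technology coefficients, and time step below are invented for illustration; only the "rate limited by the scarcest input" idea comes from the abstract.

```python
# Hedged sketch: one production phase in which each process runs at a rate
# set by its scarcest input (a Leontief-style minimum), consuming inputs and
# adding outputs. Stocks and coefficients are illustrative assumptions.

def step(stocks, processes, dt=0.1):
    """Advance all processes synchronously by one time step.
    `processes` is a list of (inputs, outputs) dicts: good -> units per
    unit of process activity. Rates are evaluated on the pre-step stocks."""
    new = dict(stocks)
    for inputs, outputs in processes:
        # rate limited by the minimum available input per unit requirement
        rate = min(stocks[g] / need for g, need in inputs.items()) * dt
        for g, need in inputs.items():
            new[g] -= rate * need
        for g, made in outputs.items():
            new[g] = new.get(g, 0.0) + rate * made
    return new
```

Iterating `step` over a network of interdependent processes (where one process's output is another's input) is where the strongly non-linear supply and demand dynamics of the model would appear.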
Inhibition of the anaerobic digestion process by linear alkylbenzene sulfonates
Gavala, Hariklia N.; Ahring, Birgitte Kiær
2002-01-01
Linear Alkylbenzene Sulfonates (LAS) are the most widely used synthetic anionic surfactants. They are anthropogenic, toxic compounds and are found in the primary sludge generated in municipal wastewater treatment plants. Primary sludge is usually stabilized anaerobically, and therefore it is important to investigate the effect of these xenobiotic compounds on an anaerobic environment. The inhibitory effect of LAS on the acetogenic and methanogenic steps of the anaerobic digestion process was studied. LAS inhibit both acetogenesis from propionate...
Zhao, Junpeng; Pahovnik, David; Gnanou, Yves; Hadjichristidis, Nikolaos
2014-01-01
A "catalyst switch" strategy has been used to sequentially polymerize four different heterocyclic monomers. In the first step, epoxides (1,2-butylene oxide and ethylene oxide) were successively polymerized from a monohydroxy or trihydroxy initiator in the presence of a strong phosphazene base promoter (t-BuP4). Then, an excess of diphenyl phosphate (DPP) was introduced, followed by addition and polymerization of a cyclic carbonate (trimethylene carbonate) and a cyclic ester (δ-valerolactone or ε-caprolactone). DPP acted as both neutralizer of the phosphazenium alkoxide (polyether chain end) and activator of the cyclic carbonate/ester. Using this method, linear- and star-tetrablock quarterpolymers were prepared in one pot. This work emphasizes the strength of the previously developed catalyst switch strategy for the facile metal-free synthesis of complex macromolecular architectures.
Analysis of membrane fusion as a two-state sequential process: evaluation of the stalk model.
Weinreb, Gabriel; Lentz, Barry R
2007-06-01
We propose a model that accounts for the time courses of PEG-induced fusion of membrane vesicles of varying lipid compositions and sizes. The model assumes that fusion proceeds from an initial, aggregated vesicle state ((A) membrane contact) through two sequential intermediate states (I(1) and I(2)) and then on to a fusion pore state (FP). Using this model, we interpreted data on the fusion of seven different vesicle systems. We found that the initial aggregated state involved no lipid or content mixing but did produce leakage. The final state (FP) was not leaky. Lipid mixing normally dominated the first intermediate state (I(1)), but content mixing signal was also observed in this state for most systems. The second intermediate state (I(2)) exhibited both lipid and content mixing signals and leakage, and was sometimes the only leaky state. In some systems, the first and second intermediates were indistinguishable and converted directly to the FP state. Having also tested a parallel, two-intermediate model subject to different assumptions about the nature of the intermediates, we conclude that a sequential, two-intermediate model is the simplest model sufficient to describe PEG-mediated fusion in all vesicle systems studied. We conclude as well that a fusion intermediate "state" should not be thought of as a fixed structure (e.g., "stalk" or "transmembrane contact") of uniform properties. Rather, a fusion "state" describes an ensemble of similar structures that can have different mechanical properties. Thus, a "state" can have varying probabilities of having a given functional property such as content mixing, lipid mixing, or leakage. Our data show that the content mixing signal may occur through two processes, one correlated and one not correlated with leakage. Finally, we consider the implications of our results in terms of the "modified stalk" hypothesis for the mechanism of lipid pore formation. We conclude that our results not only support this hypothesis but
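The sequential scheme A -> I1 -> I2 -> FP above can be sketched as a simple mass-action integration. The rate constants and the forward-Euler step below are illustrative assumptions, not the paper's fitted values, and the sketch omits the lipid-mixing, content-mixing, and leakage observables attached to each state.

```python
# Hedged sketch: irreversible sequential kinetics A -> I1 -> I2 -> FP,
# integrated with forward Euler. k1, k2, k3 are illustrative rate constants.

def integrate(k1=1.0, k2=0.5, k3=0.2, t_end=20.0, dt=0.001):
    """Return the populations (A, I1, I2, FP) at time t_end, starting from
    all vesicle pairs in the aggregated state A."""
    A, I1, I2, FP = 1.0, 0.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        dA = -k1 * A
        dI1 = k1 * A - k2 * I1
        dI2 = k2 * I1 - k3 * I2
        dFP = k3 * I2
        A += dA * dt
        I1 += dI1 * dt
        I2 += dI2 * dt
        FP += dFP * dt
        t += dt
    return A, I1, I2, FP
```

Fitting such time courses (with each state weighted by its probability of showing lipid mixing, content mixing, or leakage) against fluorescence data is, in outline, how the paper discriminates the sequential model from the parallel alternative.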
Induction linear accelerators for commercial photon irradiation processing
Matthews, S.M.
1989-01-01
A number of proposed irradiation processes require bulk rather than surface exposure with intense applications of ionizing radiation. Typical examples are irradiation of food packaged in pallet-size containers, processing of sewer sludge for recycling as landfill and fertilizer, sterilization of prepackaged medical disposables, treatment of municipal water supplies for pathogen reduction, etc. Volumetric processing of dense, bulky products with ionizing radiation requires high-energy photon sources because electrons are not penetrating enough to provide uniform bulk dose deposition in thick, dense samples. Induction Linear Accelerator (ILA) technology developed at the Lawrence Livermore National Laboratory promises to play a key role in providing solutions to this problem. This is discussed in this paper.
Zhou, Yu; Pearson, John E; Auerbach, Anthony
2005-12-01
We derive the analytical form of a rate-equilibrium free-energy relationship (with slope Phi) for a bounded, linear chain of coupled reactions having arbitrary connecting rate constants. The results confirm previous simulation studies showing that Phi-values reflect the position of the perturbed reaction within the chain, with reactions occurring earlier in the sequence producing higher Phi-values than those occurring later in the sequence. The derivation includes an expression for the transmission coefficients of the overall reaction based on the rate constants of an arbitrary, discrete, finite Markov chain. The results indicate that experimental Phi-values can be used to calculate the relative heights of the energy barriers between intermediate states of the chain but provide no information about the energies of the wells along the reaction path. Application of the equations to the case of diliganded acetylcholine receptor channel gating suggests that the transition-state ensemble for this reaction is nearly flat. Although this mechanism accounts for many of the basic features of diliganded and unliganded acetylcholine receptor channel gating, the experimental rate-equilibrium free-energy relationships appear to be more linear than those predicted by the theory.
NON-LINEAR FINITE ELEMENT MODELING OF DEEP DRAWING PROCESS
Hasan YILDIZ
2004-03-01
Deep drawing is one of the main processes used in different branches of industry. Finding numerical solutions that determine the mechanical behaviour of this process saves time and money. For die surfaces with complex geometries, it is hard to determine the effects of the sheet-metal-forming parameters. These parameters include wrinkling, tearing, the flow of the thin sheet metal in the die, and thickness change. However, the most difficult task is the determination of material properties during plastic deformation. In this study, the effects of all these parameters are analyzed before producing the dies. The explicit non-linear finite element method is chosen for the analysis. The numerical results obtained for non-linear material and contact models are also compared with experiments. A good agreement between the numerical and the experimental results is obtained. The results obtained for the models are given in detail.
Escande, Vincent; Renard, Brice-Loïc; Grison, Claude
2015-04-01
Among the phytotechnologies used for the reclamation of degraded mining sites, phytoextraction aims to diminish the concentration of polluting elements in contaminated soils. However, the biomass resulting from phytoextraction processes (highly enriched in polluting elements) is too often considered a problematic waste. The manganese-enriched biomass derived from native Mn-hyperaccumulating plants of New Caledonia is presented here as a valuable source of metallic elements of high interest in chemical catalysis. The preparation of the catalyst Eco-Mn1 and the reagent Eco-Mn2, derived from Grevillea exul exul and Grevillea exul rubiginosa, was investigated. Their unusual polymetallic compositions made it possible to explore new reactivity of manganese in low oxidation states: Mn(II) for Eco-Mn1 and Mn(IV) for Eco-Mn2. Eco-Mn1 was used as a Lewis acid to catalyze the acetalization/elimination of aldehydes into enol ethers with high yields; a new green and stereoselective synthesis of (-)-isopulegol via the carbonyl-ene cyclization of (+)-citronellal was also performed with Eco-Mn1. Eco-Mn2 was used as a mild oxidative reagent and controlled the oxidation of aliphatic alcohols into aldehydes with quantitative yields. Oxidative cleavage was interestingly noticed when Eco-Mn2 was used in the presence of a polyol. Eco-Mn2 allowed direct oxidative iodination of ketones without using iodine, which is strongly discouraged by new environmental legislation. Finally, the combination of the properties of the Eco-Mn catalysts and reagents gave them an unprecedented potential to perform sequential tandem oxidation processes, through new green syntheses of p-cymene from (-)-isopulegol and (+)-citronellal, and a new green synthesis of functionalized pyridines by in situ oxidation of 1,4-dihydropyridines.
Linear circuits, systems and signal processing: theory and application
Byrnes, C.I.; Saeks, R.E.; Martin, C.F.
1988-01-01
In part because of its universal role as a first approximation of more complicated behaviour and in part because of the depth and breadth of its principal paradigms, the study of linear systems continues to play a central role in control theory and its applications. Enhancing more traditional applications to aerospace and electronics, application areas such as econometrics, finance, and speech and signal processing have contributed to a renaissance in areas such as realization theory and classical automatic feedback control. Thus, the last few years have witnessed a remarkable research effort expended in understanding both new algorithms and new paradigms for modeling and realization of linear processes and in the analysis and design of robust control strategies. The papers in this volume reflect these trends in both the theory and applications of linear systems and were selected from the invited and contributed papers presented at the 8th International Symposium on the Mathematical Theory of Networks and Systems held in Phoenix on June 15-19, 1987.
Giacomino, Agnese; Abollino, Ornella; Malandrino, Mery; Mentasti, Edoardo
2011-03-04
Single and sequential extraction procedures are used for studying element mobility and availability in solid matrices, like soils, sediments, sludge, and airborne particulate matter. In the first part of this review we reported an overview on these procedures and described the applications of chemometric uni- and bivariate techniques and of multivariate pattern recognition techniques based on variable reduction to the experimental results obtained. The second part of the review deals with the use of chemometrics not only for the visualization and interpretation of data, but also for the investigation of the effects of experimental conditions on the response, the optimization of their values and the calculation of element fractionation. We will describe the principles of the multivariate chemometric techniques considered, the aims for which they were applied and the key findings obtained. The following topics will be critically addressed: pattern recognition by cluster analysis (CA), linear discriminant analysis (LDA) and other less common techniques; modelling by multiple linear regression (MLR); investigation of spatial distribution of variables by geostatistics; calculation of fractionation patterns by a mixture resolution method (Chemometric Identification of Substrates and Element Distributions, CISED); optimization and characterization of extraction procedures by experimental design; other multivariate techniques less commonly applied.
Rudashevskaya, Elena L; Breitwieser, Florian P; Huber, Marie L; Colinge, Jacques; Müller, André C; Bennett, Keiryn L
2013-02-05
The identification and validation of cross-linked peptides by mass spectrometry remains a daunting challenge for protein-protein cross-linking approaches when investigating protein interactions. This includes the fragmentation of cross-linked peptides in the mass spectrometer per se and, following database searching, the matching of the molecular masses of the fragment ions to the correct cross-linked peptides. The hybrid linear trap quadrupole (LTQ) Orbitrap Velos combines the speed of the tandem mass spectrometry (MS/MS) duty cycle with high mass accuracy, and these features were utilized in the current study to substantially improve the confidence in the identification of cross-linked peptides. An MS/MS method termed multiple and sequential data acquisition method (MSDAM) was developed. Preliminary optimization of the MS/MS settings was performed with a synthetic peptide (TP1) cross-linked with bis[sulfosuccinimidyl] suberate (BS3). On the basis of these results, MSDAM was created and assessed on the BS3-cross-linked bovine serum albumin (BSA) homodimer. MSDAM applies a series of multiple sequential fragmentation events with a range of different normalized collision energies (NCE) to the same precursor ion. The combination of a series of NCE enabled a considerable improvement in the quality of the fragmentation spectra for cross-linked peptides, and ultimately aided in the identification of the sequences of the cross-linked peptides. Concurrently, MSDAM provides confirmatory evidence from the formation of reporter-ion fragments, which reduces the false positive rate of incorrectly assigned cross-linked peptides.
Qi, Wenqiang; Chen, Taojing; Wang, Liang; Wu, Minghong; Zhao, Quanyu; Wei, Wei
2017-03-01
In this study, the sequential process of anaerobic fermentation followed by microalgae cultivation was evaluated from both nutrient and energy recovery standpoints. The effects of different fermentation types on biogas generation, broth metabolite composition, algal growth and nutrient utilization, and the energy conversion efficiencies of the whole process are discussed. When the fermentation was designed to produce hydrogen-dominated biogas, the total energy conversion efficiency (TECE) of the sequential process was higher than that of the methane fermentation one. With the production of hydrogen in anaerobic fermentation, more organic carbon metabolites were left in the broth to support better algal growth with more efficient incorporation of ammonia nitrogen. By applying the sequential process, the heat value conversion efficiency (HVCE) for the wastewater could reach 41.2% if methane was avoided in the fermentation biogas. The removal efficiencies of organic metabolites and NH4+-N in the better case were 100% and 98.3%, respectively.
Sequential continuous flow processes for the oxidation of amines and azides by using HOF·MeCN.
McPake, Christopher B; Murray, Christopher B; Sandford, Graham
2012-02-13
The generation and use of the highly potent oxidising agent HOF·MeCN in a controlled single continuous flow process is described. Oxidations of amines and azides to corresponding nitrated systems by using fluorine gas, water and acetonitrile by sequential gas-liquid/liquid-liquid continuous flow procedures are reported.
Instantaneous Switching Processes in Quasi-Linear Circuits
Rositsa Angelova
2004-01-01
The paper considers instantaneous processes in electrical circuits produced by the stepwise change of the capacitance of the capacitor and the inductance of the inductor, and by the switching on and off of the circuit. In order to determine the set of electrical circuits for which it is possible to explicitly obtain the values of the currents and the voltages at the end of the instantaneous process, a classification of networks with nonlinear elements is introduced. The instantaneous switching process at the moment t0 is approximated, as T -> t0, by a sequence of processes on the interval [t0, T]. For quasi-linear inductive and capacitive circuits, we present the type of system satisfied by the currents, the voltages, the charges, and the fluxes on the interval [t0, T]. From this system, after passage to the limit T -> t0, we obtain formulas for the values of the circuit variables at the end of the instantaneous process. The obtained results are applied to the analysis of particular processes.
Fang, Yili; Yin, Weizhao; Jiang, Yanbin; Ge, Hengjun; Li, Ping; Wu, Jinhua
2018-05-01
In this study, a sequential Fe0/H2O2 reaction and biological process was employed as a low-cost advanced treatment method to remove recalcitrant compounds from coal-chemical engineering wastewater after regular biological treatment. First, a chemical oxygen demand (COD) and color removal efficiency of 66 and 63% was achieved at an initial pH of 6.8, 25 mmol/L of H2O2, and 2 g/L of Fe0 in the Fe0/H2O2 reaction. According to gas chromatography-mass spectrometry (GC-MS) and gas chromatography-flame ionization detection (GC-FID) analyses, the recalcitrant compounds were effectively decomposed into short-chain organic acids such as acetic, propionic, and butyric acids. Although these acids were resistant to the Fe0/H2O2 reaction, they were effectively eliminated in the sequential air lift reactor (ALR) at a hydraulic retention time (HRT) of 2 h, resulting in a further decrease of COD from 120 to 51 mg/L and of color from 70 to 38 times. A low operational cost of $0.35 per m3 was achieved because pH adjustment and iron-containing sludge disposal could be avoided: at the original pH of 6.8, the sequential process achieved total COD and color removal efficiencies of 85 and 79%, with a ferric ion concentration below 0.8 mg/L after the Fe0/H2O2 reaction. This indicates that the above sequential process is a promising and cost-effective method for the advanced treatment of coal-chemical engineering wastewaters to satisfy discharge requirements.
Induction-linear accelerators for food processing with ionizing radiation
Lagunas-Solar, M.C.
1985-01-01
Electron accelerators with sufficient beam power and reliability of operation will be required for applications in the large-scale radiation processing of food. Electron beams can be converted to the more penetrating bremsstrahlung radiation (X-rays), although at a great expense in useful X-ray power due to small conversion efficiencies. Recent advances in the technology of pulse-power accelerators indicate that Linear Induction Electron Accelerators (LIEA) are capable of sufficiently high beam current and pulse repetition rate, while delivering ultra-short pulses of high voltage. The application of LIEA systems in food irradiation provides the potential for high product output and compact, modular-type systems readily adaptable to food processing facilities.
Can complex cellular processes be governed by simple linear rules?
Selvarajoo, Kumar; Tomita, Masaru; Tsuchiya, Masa
2009-02-01
Complex living systems have shown remarkably well-orchestrated, self-organized, robust, and stable behavior under a wide range of perturbations. However, despite the recent generation of high-throughput experimental datasets, basic cellular processes such as division, differentiation, and apoptosis still remain elusive. One of the key reasons is the lack of understanding of the governing principles of complex living systems. Here, we have reviewed the success of perturbation-response approaches, in which, without requiring detailed in vivo physiological parameters, the analysis of temporal concentration or activation responses unravels biological network features such as causal relationships of reactant species, regulatory motifs, etc. Our review shows that simple linear rules govern the response behavior of biological networks in an ensemble of cells. It is puzzling why such simplicity holds in a complex, heterogeneous environment. If physical reasons for these phenomena can be explained, a major advancement in the understanding of basic cellular processes could be achieved.
Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process
Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas
2018-05-01
This note reports the development of a new method for linearizing Mössbauer spectra recorded with a sine drive velocity signal. The linearity of Mössbauer spectra is a critical parameter determining spectrometer accuracy. Measuring spectra with a sine velocity axis and consecutive linearization increases the linearity of spectra over a wider frequency range of the drive signal, since harmonic movement is natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line-width parameters in comparison with those measured using a traditional triangle velocity signal.
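The linearization step described above can be sketched as a resampling from a sinusoidal velocity axis onto an equally spaced one. The channel counts, half-period sine drive, and linear interpolation below are synthetic assumptions for illustration, not the authors' procedure in detail.

```python
import math

# Hedged sketch: map counts recorded on a sinusoidal velocity axis onto a
# linear velocity axis by interpolation between channel sample points.

def sine_velocity_axis(n_channels, v_max):
    """Velocity at each channel for a sine drive over half a period,
    sweeping monotonically from -v_max to +v_max."""
    return [v_max * math.sin(math.pi * (i / (n_channels - 1)) - math.pi / 2)
            for i in range(n_channels)]

def linearize(counts, v_sine, n_out):
    """Resample counts onto an equally spaced velocity grid by linear
    interpolation between the (monotonically increasing) sine-axis points."""
    v_lin = [v_sine[0] + (v_sine[-1] - v_sine[0]) * j / (n_out - 1)
             for j in range(n_out)]
    out = []
    k = 0
    for v in v_lin:
        # advance to the sine-axis interval [v_sine[k], v_sine[k+1]] around v
        while k + 1 < len(v_sine) - 1 and v_sine[k + 1] < v:
            k += 1
        frac = (v - v_sine[k]) / (v_sine[k + 1] - v_sine[k])
        out.append(counts[k] + frac * (counts[k + 1] - counts[k]))
    return v_lin, out
```

Because the sine axis samples the velocity range densely near the turning points, such a resampling redistributes channels onto the uniform grid on which standard Mössbauer fitting assumes linearity.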
High-Dimensional Quantum Information Processing with Linear Optics
Fitzpatrick, Casey A.
Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single photon detectors and quantum repeaters. Another, more abstract strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated photon imaging scheme that uses orbital angular momentum (OAM) states to detect rotational symmetries in objects, and to build images from those interactions, is reported. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for
Shin, Yong-Uk; Yoo, Ha-Young; Kim, Seonghun; Chung, Kyung-Mi; Park, Yong-Gyun; Hwang, Kwang-Hyun; Hong, Seok Won; Park, Hyunwoong; Cho, Kangwoo; Lee, Jaesang
2017-09-19
A two-stage sequential electro-Fenton (E-Fenton) oxidation followed by electrochemical chlorination (EC) was demonstrated to concomitantly treat high concentrations of organic carbon and ammonium nitrogen (NH4+-N) in real anaerobically digested food wastewater (ADFW). The anodic Fenton process caused the rapid mineralization of phenol as a model substrate through the production of hydroxyl radical as the main oxidant. The electrochemical oxidation of NH4+ by a dimensionally stable anode (DSA) resulted in temporal concentration profiles of combined and free chlorine species that were analogous to those during the conventional breakpoint chlorination of NH4+. Together with the minimal production of nitrate, this confirmed that the conversion of NH4+ to nitrogen gas was electrochemically achievable. The monitoring of treatment performance with varying key parameters (e.g., current density, H2O2 feeding rate, pH, NaCl loading, and DSA type) led to the optimization of the two component systems. The comparative evaluation of the two sequentially combined systems (i.e., the E-Fenton-EC system versus the EC-E-Fenton system) using the mixture of phenol and NH4+ under the predetermined optimal conditions suggested the superiority of the E-Fenton-EC system in terms of treatment efficiency and energy consumption. Finally, the sequential E-Fenton-EC process effectively mineralized organic carbon and decomposed NH4+-N in the real ADFW without an external supply of NaCl.
Jang, M.; Lee, H.J.; Shim, Y. [Korean Mine Reclamation Corporation MIRECO, Seoul (Republic of Korea)
2010-07-01
The processes of coagulation and flocculation using high molecular weight long-chain polymers were applied to treat mine water having fine flocs of which about 93% of the total mass was less than 3.02 µm, representing the size distribution of fine particles. Six different combinations of acryl-type anionic flocculants and polyamine-type cationic coagulants were selected to conduct kinetic tests on turbidity removal in mine water. Optimization studies on the types and concentrations of the coagulant and flocculant showed that the highest rate of turbidity removal was obtained with 10 mg L⁻¹ FL-2949 (coagulant) and 12 mg L⁻¹ A333E (flocculant), which was about 14.4 and 866.7 times higher than that obtained with A333E alone and that obtained through natural precipitation by gravity, respectively. With this optimized condition, the turbidity of mine water was reduced to 0 NTU within 20 min. Zeta potential measurements were conducted to elucidate the removal mechanism of the fine particles, and they revealed that there was a strong linear relationship between the removal rate of each pair of coagulant and flocculant application and the zeta potential differences that were obtained by subtracting the zeta potential of flocculant-treated mine water from the zeta potential of coagulant-treated mine water. Accordingly, through an optimization process, coagulation-flocculation by use of polymers could be advantageous to mine water treatment, because the process rapidly removes fine particles in mine water and only requires a small-scale plant for set-up purposes owing to the short retention time in the process.
Linear and Nonlinear MHD Wave Processes in Plasmas. Final Report
Tataronis, J. A.
2004-01-01
This program treats theoretically low frequency linear and nonlinear wave processes in magnetized plasmas. A primary objective has been to evaluate the effectiveness of MHD waves to heat plasma and drive current in toroidal configurations. The research covers the following topics: (1) the existence and properties of the MHD continua in plasma equilibria without spatial symmetry; (2) low frequency nonresonant current drive and nonlinear Alfven wave effects; and (3) nonlinear electron acceleration by rf and random plasma waves. Results have contributed to the fundamental knowledge base of MHD activity in symmetric and asymmetric toroidal plasmas. Among the accomplishments of this research effort, the following are highlighted: Identification of the MHD continuum mode singularities in toroidal geometry. Derivation of a third order ordinary differential equation that governs nonlinear current drive in the singular layers of the Alfven continuum modes in axisymmetric toroidal geometry; bounded solutions of this ODE imply a net average current parallel to the toroidal equilibrium magnetic field. Discovery of a new unstable continuum of the linearized MHD equation in axially periodic circular plasma cylinders with shear and incompressibility; this continuum, which we named the ''accumulation continuum'' and which is related to ballooning modes, arises as discrete unstable eigenfrequencies accumulate on the imaginary frequency axis in the limit of large mode numbers. Development of techniques to control nonlinear electron acceleration through the action of multiple coherent and random plasma waves. Two important elements of this program are student participation and student training in plasma theory.
Linear response in the nonequilibrium zero range process
Maes, Christian; Salazar, Alberto
2014-01-01
We explore a number of explicit response formulæ around the boundary driven zero range process to changes in the exit and entrance rates. In such a nonequilibrium regime kinetic (and not only thermodynamic) aspects make a difference in the response. Apart from a number of formal approaches, we illustrate a general decomposition of the linear response into entropic and frenetic contributions, the latter being realized from changes in the dynamical activity at the boundaries. In particular, in this way one obtains nonlinear modifications to the Green-Kubo relation. We end with some general remarks on the situation where that nonequilibrium response remains given by the (equilibrium) Kubo formula, such as for the density profile in the boundary driven Lorentz gas.
Small-scale quantum information processing with linear optics
Bergou, J.A.; Steinberg, A.M.; Mohseni, M.
2005-01-01
Full text: Photons are the ideal systems for carrying quantum information. Although performing large-scale quantum computation on optical systems is extremely demanding, non-scalable linear-optics quantum information processing may prove essential as part of quantum communication networks. In addition, efficient (scalable) linear-optical quantum computation proposals rely on the same optical elements. Here, by constructing multirail optical networks, we experimentally study two central problems in quantum information science, namely optimal discrimination between nonorthogonal quantum states, and controlling decoherence in quantum systems. Quantum mechanics forbids deterministic discrimination between nonorthogonal states. This is one of the central features of quantum cryptography, which leads to secure communications. Quantum state discrimination is an important primitive in quantum information processing, since it determines the limitations of a potential eavesdropper, and it has applications in quantum cloning and entanglement concentration. In this work, we experimentally implement generalized measurements in an optical system and demonstrate the first optimal unambiguous discrimination between three non-orthogonal states with a success rate of 55%, to be compared with the 25% maximum achievable using projective measurements. Furthermore, we present the first realization of unambiguous discrimination between a pure state and a nonorthogonal mixed state. In a separate experiment, we demonstrate how decoherence-free subspaces (DFSs) may be incorporated into a prototype optical quantum algorithm. Specifically, we present an optical realization of the two-qubit Deutsch-Jozsa algorithm in the presence of random noise. By introduction of localized turbulent airflow we produce a collective optical dephasing, leading to large error rates, and demonstrate that using DFS encoding, the error rate in the presence of decoherence can be reduced from 35% to essentially its pre
Numerical simulation of linear friction welding (LFW) processes
Fratini, L.; La Spisa, D.
2011-05-01
Solid state welding processes are becoming increasingly important due to a large number of advantages related to joining "unweldable" materials, in particular light weight alloys. Linear friction welding (LFW) has been used successfully to bond non-axisymmetric components of a range of materials including titanium alloys, steels, aluminum alloys, nickel, copper, and also dissimilar material combinations. The technique is useful for ensuring joint quality and for reducing the cost of components and parts in the aeronautic and automotive industries. LFW joins parts through the relative reciprocating motion of two components under an axial force. In such a process the heat source is the work of the frictional forces decaying into heat, which causes a local softening of the material and proper bonding conditions due to both the temperature increase and the local pressure of the two edges to be welded. This paper presents a comparative test between a numerical model in two dimensions, i.e. in plane strain conditions, and one in three dimensions of an LFW process on AISI 1045 steel specimens. It must be observed that the 3D model assures a faithful simulation of the actual three-dimensional material flow, even if the two-dimensional simulation's computational times are very short: a few hours, instead of several for the 3D model. The obtained results were compared with experimental values found in the scientific literature.
Mamdani-Fuzzy Modeling Approach for Quality Prediction of Non-Linear Laser Lathing Process
Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.
2018-03-01
Lathing is a process for fashioning stock materials into desired cylindrical shapes, usually performed on a traditional lathe machine. However, recent rapid advancements in engineering materials and precision demands pose a great challenge to the traditional method. The main drawback of the conventional lathe is its mechanical contact, which leads to undesirable tool wear, a heat affected zone, and problems with finishing and dimensional accuracy, especially taper quality, in machining of stock with a high length to diameter ratio. Therefore, a novel approach was devised to investigate transforming a 2D flatbed CO2 laser cutting machine into a machine with 3D laser lathing capability as an alternative solution. Three significant design parameters were selected for this experiment, namely cutting speed, spinning speed, and depth of cut. A total of 24 experiments were performed in eight (8) sequential runs, which were then replicated three (3) times. The experimental results were then used to establish a Mamdani-Fuzzy predictive model, which yields an accuracy of more than 95%. Thus, the proposed Mamdani-Fuzzy modelling approach is found to be suitable and practical for quality prediction of the non-linear laser lathing process for cylindrical stocks of 10 mm diameter.
Non-linear processes in the Earth atmosphere boundary layer
Grunskaya, Lubov; Valery, Isakevich; Dmitry, Rubay
2013-04-01
The work is connected with studying electromagnetic fields in the Earth-ionosphere resonator. The interconnection of tide processes of geophysical and astrophysical origin with the Earth's electromagnetic fields is studied. On account of the non-linear properties of the Earth-ionosphere resonator, the tides (lunar and astrophysical) in the Earth's electromagnetic fields are polyharmonic in nature. It is impossible to detect such non-linear processes with the help of classical spectral analysis. Therefore, to extract tide processes in the electromagnetic fields, the method of covariance matrix eigenvectors is used. Experimental investigations of electromagnetic fields in the atmospheric boundary layer are performed at distantly spaced stations situated at the Vladimir State University test ground, at the Main Geophysical Observatory (St. Petersburg), on the Kamchatka peninsula, and at Lake Baikal. In 2012 the multichannel synchronous monitoring system of electrical and geomagnetic fields continued to operate at the spaced-apart stations: the VSU physical experimental proving ground; the station of the Institute of Solar and Terrestrial Physics of the Russian Academy of Science (RAS) at Lake Baikal; the station of the Institute of Volcanology and Seismology of RAS in Paratunka; and the station in Obninsk on the base of the scientific and production society "Typhoon". Such investigations became possible after developing a method for decomposing the experimental electromagnetic field signal into non-correlated components. A method based on the eigenvectors of the time series covariance matrix was used to expose the influence of the lunar tides on Ez. The method allows an experimental signal to be decomposed into non-correlated periodicities. The present method is effective precisely in the situation when the energy deposited by the possible influence of lunar tides upon the electromagnetic fields is small. There have been developed and realized in program components
Cognitive processes associated with sequential tool use in New Caledonian crows.
Joanna H Wimpenny
Full Text Available BACKGROUND: Using tools to act on non-food objects--for example, to make other tools--is considered to be a hallmark of human intelligence, and may have been a crucial step in our evolution. One form of this behaviour, 'sequential tool use', has been observed in a number of non-human primates and even in one bird, the New Caledonian crow (Corvus moneduloides). While sequential tool use has often been interpreted as evidence for advanced cognitive abilities, such as planning and analogical reasoning, the behaviour itself can be underpinned by a range of different cognitive mechanisms, which have never been explicitly examined. Here, we present experiments that not only demonstrate new tool-using capabilities in New Caledonian crows, but allow examination of the extent to which crows understand the physical interactions involved. METHODOLOGY/PRINCIPAL FINDINGS: In two experiments, we tested seven captive New Caledonian crows in six tasks requiring the use of up to three different tools in a sequence to retrieve food. Our study incorporated several novel features: (i) we tested crows on a three-tool problem (subjects were required to use a tool to retrieve a second tool, then use the second tool to retrieve a third one, and finally use the third one to reach for food); (ii) we presented tasks of different complexity in random rather than progressive order; (iii) we included a number of control conditions to test whether tool retrieval was goal-directed; and (iv) we manipulated the subjects' pre-testing experience. Five subjects successfully used tools in a sequence (four from their first trial), and four subjects repeatedly solved the three-tool condition. Sequential tool use did not require, but was enhanced by, pre-training on each element in the sequence ('chaining'), an explanation that could not be ruled out in earlier studies. By analyzing tool choice, tool swapping and improvement over time, we show that successful subjects did not use a random
Arora, Amit; Dien, Bruce S; Belyea, Ronald L; Singh, Vijay; Tumbleson, M E; Rausch, Kent D
2010-06-01
The effectiveness of microfiltration (MF) and ultrafiltration (UF) for nutrient recovery from a thin stillage stream was determined. When a stainless steel MF membrane (0.1 µm pore size) was used, the content of solids increased from 7.0% to 22.8% with a mean permeate flux rate of 45 L/m²/h (LMH), fat increased, and ash content decreased. UF experiments were conducted in batch mode under constant temperature and flow rate conditions. Permeate flux profiles were evaluated for regenerated cellulose membranes (YM1, YM10 and YM100) with molecular weight cut-offs of 1, 10 and 100 kDa. UF increased total solids, protein and fat and decreased ash in the retentate stream. When permeate streams from MF were subjected to UF, retentate total solids concentrations similar to those of commercial syrup (23-28.8%) were obtained. YM100 had the highest percent permeate flux decline (70% of initial flux), followed by YM10 and YM1 membranes. Sequential filtration improved permeate flux rates of the YM100 membrane (32.6-73.4 LMH), but the percent decline was also highest in a sequential MF+YM100 system. Protein recovery was highest in the YM1 retentate. Removal of solids, protein and fat from thin stillage may generate a permeate stream that may improve water removal efficiency and increase water recycling.
Suarez Antola, R [Universidad Catolica del Uruguay, Montevideo (Uruguay); Artucio, G [Ministerio de Industria Energia y Mineria. Direccion Nacional de Tecnologia Nuclear, Montevideo (Uruguay)
1995-08-01
A sequential method for the assessment of innovations in industrial processes is proposed, using suitable combinations of mathematical modelling and numerical simulation of dynamics. Some advantages and limitations of the proposed method are discussed.
Zhang, Qinglong; Zhu, Yannan; Jin, Hongxing; Huang, You
2017-04-04
A novel phosphine mediated sequential annulation process to construct functionalized aza-benzobicyclo[4.3.0] derivatives has been developed, involving a one-pot sequential catalytic and stoichiometric process, which generates a series of benzobicyclo[4.3.0] compounds containing one quaternary center with up to 94% yield and a 20:1 dr value. In this reaction, MBH carbonates act as 1,2,3-C3 synthons.
Tarasevich, Yuri Yu.; Laptev, Valeri V.; Goltseva, Valeria A.; Lebovka, Nikolai I.
2017-07-01
The effect of defects on the behaviour of the electrical conductivity, σ, in a monolayer produced by the random sequential adsorption of linear k-mers (particles occupying k adjacent sites) onto a square lattice is studied by means of a Monte Carlo simulation. The k-mers are deposited on the substrate until a jamming state is reached. The presence of defects in the lattice (impurities) and of defects in the k-mers, with concentrations of dl and dk, respectively, is assumed. The defects in the lattice are distributed randomly before deposition and these lattice sites are forbidden for the deposition of k-mers. The defects of the k-mers are distributed randomly on the deposited k-mers. The sites filled with k-mers have a high electrical conductivity, σk, whereas the empty sites and the sites filled by either type of defect have a low electrical conductivity, σl; i.e., a high contrast, σk/σl ≫ 1, is assumed. We examined isotropic (both the possible x and y orientations of a particle are equiprobable) and anisotropic (all particles are aligned along one given direction, y) deposition. To calculate the effective electrical conductivity, the monolayer was presented as a random resistor network and the Frank-Lobb algorithm was used. The effects of the concentrations of defects dl and dk on the electrical conductivity for the values of k = 2^n, where n = 1, 2, …, 5, were studied. Increases in the values of both the dl and dk parameters resulted in decreases in the value of σ and the suppression of percolation. Moreover, for anisotropic deposition the electrical conductivity along the y direction was noticeably larger than in the perpendicular direction, x. Phase diagrams in the (dl, dk)-plane for different values of k were obtained.
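The deposition model described in this abstract can be sketched in a few lines (a simplified illustration, not the authors' code: lattice size, attempt budget, and the approximate jamming criterion are assumptions, and the Frank-Lobb conductivity calculation is omitted):

```python
import random

# Sketch: random sequential adsorption of linear k-mers on an L x L square
# lattice (periodic boundaries) with a fraction d_l of randomly placed
# lattice defects that block deposition. Jamming is approximated by a
# fixed budget of deposition attempts.

def rsa_kmers(L=32, k=4, d_l=0.05, attempts=200_000, seed=1):
    rng = random.Random(seed)
    # 0 = empty, 1 = occupied by a k-mer segment, 2 = lattice defect
    grid = [[2 if rng.random() < d_l else 0 for _ in range(L)] for _ in range(L)]
    for _ in range(attempts):
        x, y = rng.randrange(L), rng.randrange(L)
        dx, dy = rng.choice([(1, 0), (0, 1)])   # isotropic: x or y orientation
        cells = [((x + i * dx) % L, (y + i * dy) % L) for i in range(k)]
        if all(grid[cy][cx] == 0 for cx, cy in cells):
            for cx, cy in cells:
                grid[cy][cx] = 1                # deposit the whole k-mer
    occupied = sum(row.count(1) for row in grid)
    return occupied / (L * L)                   # coverage near jamming

theta = rsa_kmers()
```

Raising `d_l` in this sketch lowers the attainable coverage, mirroring the paper's observation that defects suppress conductivity and percolation.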
Sequential inhibitory control processes assessed through simultaneous EEG-fMRI.
Baumeister, Sarah; Hohmann, Sarah; Wolf, Isabella; Plichta, Michael M; Rechtsteiner, Stefanie; Zangl, Maria; Ruf, Matthias; Holz, Nathalie; Boecker, Regina; Meyer-Lindenberg, Andreas; Holtmann, Martin; Laucht, Manfred; Banaschewski, Tobias; Brandeis, Daniel
2014-07-01
Inhibitory response control has been extensively investigated in both electrophysiological (ERP) and hemodynamic (fMRI) studies. However, very few multimodal results address the coupling of these inhibition markers. In fMRI, response inhibition has been most consistently linked to activation of the anterior insula and inferior frontal cortex (IFC), often also the anterior cingulate cortex (ACC). ERP work has established increased N2 and P3 amplitudes during NoGo compared to Go conditions in most studies. Previous simultaneous EEG-fMRI imaging reported association of the N2/P3 complex with activation of areas like the anterior midcingulate cortex (aMCC) and anterior insula. In this study we investigated inhibitory control in 23 healthy young adults (mean age=24.7, n=17 for EEG during fMRI) using a combined Flanker/NoGo task during simultaneous EEG and fMRI recording. Separate fMRI and ERP analysis yielded higher activation in the anterior insula, IFG and ACC as well as increased N2 and P3 amplitudes during NoGo trials in accordance with the literature. Combined analysis modelling sequential N2 and P3 effects through joint parametric modulation revealed correlation of higher N2 amplitude with deactivation in parts of the default mode network (DMN) and the cingulate motor area (CMA) as well as correlation of higher central P3 amplitude with activation of the left anterior insula, IFG and posterior cingulate. The EEG-fMRI results resolve the localizations of these sequential activations. They suggest a general role for allocation of attentional resources and motor inhibition for N2 and link memory recollection and internal reflection to P3 amplitude, in addition to previously described response inhibition as reflected by the anterior insula.
A linear process-algebraic format for probabilistic systems with data (extended version)
Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette; Timmer, Mark
2010-01-01
This paper presents a novel linear process-algebraic format for probabilistic automata. The key ingredient is a symbolic transformation of probabilistic process algebra terms that incorporate data into this linear format while preserving strong probabilistic bisimulation. This generalises similar
A linear process-algebraic format for probabilistic systems with data
Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette; Timmer, Mark; Gomes, L.; Khomenko, V.; Fernandes, J.M.
This paper presents a novel linear process algebraic format for probabilistic automata. The key ingredient is a symbolic transformation of probabilistic process algebra terms that incorporate data into this linear format while preserving strong probabilistic bisimulation. This generalises similar
Environmentally adaptive processing for shallow ocean applications: A sequential Bayesian approach.
Candy, J V
2015-09-01
The shallow ocean is a changing environment primarily due to temperature variations in its upper layers directly affecting sound propagation throughout. The need to develop processors capable of tracking these changes implies a stochastic as well as an environmentally adaptive design. Bayesian techniques have evolved to enable a class of processors capable of performing in such an uncertain, nonstationary (varying statistics), non-Gaussian, variable shallow ocean environment. A solution to this problem is addressed by developing a sequential Bayesian processor capable of providing a joint solution to the modal function tracking and environmental adaptivity problem. Here, the focus is on the development of both a particle filter and an unscented Kalman filter capable of providing reasonable performance for this problem. These processors are applied to hydrophone measurements obtained from a vertical array. The adaptivity problem is attacked by allowing the modal coefficients and/or wavenumbers to be jointly estimated from the noisy measurement data along with tracking of the modal functions while simultaneously enhancing the noisy pressure-field measurements.
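The particle filter mentioned in this abstract can be illustrated with a generic bootstrap sketch (an assumption about the class of processor, not the paper's implementation): a slowly drifting state, loosely analogous to a modal coefficient, is tracked from noisy scalar measurements.

```python
import numpy as np

# Sketch of a bootstrap particle filter: propagate particles with a
# random-walk state model, weight them by a Gaussian likelihood of the
# observation, estimate the posterior mean, then resample.

rng = np.random.default_rng(0)

def particle_filter(ys, n=500, q=0.05, r=0.5):
    parts = rng.normal(0.0, 1.0, n)                    # initial particle cloud
    estimates = []
    for y in ys:
        parts = parts + rng.normal(0.0, q, n)          # propagate (random walk)
        w = np.exp(-0.5 * ((y - parts) / r) ** 2)      # Gaussian likelihood
        w /= w.sum()
        estimates.append(np.sum(w * parts))            # posterior-mean estimate
        idx = rng.choice(n, size=n, p=w)               # multinomial resampling
        parts = parts[idx]
    return np.array(estimates)

truth = np.linspace(0.0, 2.0, 100)                     # slow drift in the state
ys = truth + rng.normal(0.0, 0.5, 100)                 # noisy measurements
est = particle_filter(ys)
```

The random-walk state model is what gives the filter its adaptivity: the estimate follows the drifting "environment" without any explicit model of the drift.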
Rezig, Leila; Chibani, Farhat; Chouaibi, Moncef; Dalgalarrondo, Michèle; Hessini, Kamel; Guéguen, Jacques; Hamdi, Salem
2013-08-14
Seed proteins extracted from Tunisian pumpkin seeds (Cucurbita maxima) were investigated for their solubility properties and sequentially extracted according to the Osborne procedure. The solubility of pumpkin proteins from seed flour was greatly influenced by pH changes and ionic strength, with higher values in the alkaline pH regions. It also depends on the seed defatting solvent. Protein solubility was decreased by using chloroform/methanol (CM) for lipid extraction instead of pentane (P). On the basis of differential solubility fractionation and depending on the defatting method, the alkali extract (AE) was the major fraction (42.1% (P), 22.3% (CM)) compared to the salt extract (8.6% (P), 7.5% (CM)). In the salt, alkali, and isopropanol extracts, all essential amino acids with the exceptions of threonine and lysine met the minimum requirements for preschool children (FAO/WHO/UNU). The denaturation temperatures were 96.6 and 93.4 °C for the salt and alkali extracts, respectively. Pumpkin protein extracts with unique protein profiles and higher denaturation temperatures could impart novel characteristics when used as food ingredients.
Modelling sequentially scored item responses
Akkermans, W.
2000-01-01
The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is
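The sequential model referred to above can be illustrated numerically with a standard step-model formulation (logistic step functions are an assumption for the sketch): an examinee passes step j with probability σ(θ − b_j), and the observed item score is the number of consecutive steps passed.

```python
import math

# Sketch of the sequential (step) model for a polytomously scored item:
# category probabilities are built from the chain of step-passing
# probabilities sigma(theta - b_j).

def sigma(z):
    return 1.0 / (1.0 + math.exp(-z))

def category_probs(theta, b):
    """P(X = x) for x = 0..m, given ability theta and step difficulties b."""
    m = len(b)
    probs = []
    reach = 1.0                           # P(first x steps all passed)
    for j in range(m):
        p = sigma(theta - b[j])
        probs.append(reach * (1.0 - p))   # fail at step j+1 -> score j
        reach *= p
    probs.append(reach)                   # passed every step -> score m
    return probs

ps = category_probs(theta=0.0, b=[-1.0, 0.0, 1.0])
```

Because each category probability is a "pass the first x steps, fail the next" product, the probabilities sum to one by construction, which distinguishes this scoring process from the partial credit and graded response parameterizations compared in the paper.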
Chang Heon Lee; Kih Soo Joe; Won Ho Kim; Euo Chang Jung; Kwang Yong Jee
2009-01-01
A sequential separation procedure has been developed for the determination of transuranic elements and fission products in uranium metal ingot samples from an electrolytic reduction process for the metallization of uranium dioxide to uranium metal in a medium of LiCl-Li2O molten salt at 650 °C. Pu, Np and U were separated using anion-exchange and tri-n-butylphosphate (TBP) extraction chromatography. Cs, Sr, Ba, Ce, Pr, Nd, Sm, Eu, Gd, Zr and Mo were separated in several groups from Am and Cm using TBP and di(2-ethylhexyl)phosphoric acid (HDEHP) extraction chromatography. The effect of Fe, Ni, Cr and Mg, which were corrosion products formed through the process, on the separation of the analytes was investigated in detail. The validity of the separation procedure was evaluated by measuring the recovery of the stable metals and of 239Pu, 237Np, 241Am and 244Cm added to a synthetic uranium metal ingot dissolved solution.
Sakurai, Yasuhisa; Asami, Masahiko; Mannen, Toru
2010-01-15
To determine the features of alexia or agraphia with a left angular or supramarginal gyrus lesion. We assessed the reading and writing abilities of three patients using kanji (Japanese morphograms) and kana (Japanese syllabograms). Patient 1 showed kana alexia and kanji agraphia following a hemorrhage in the left angular gyrus and the adjacent lateral occipital gyri. Patient 2 presented with minimal pure agraphia for both kanji and kana after an infarction in the left angular gyrus involving part of the supramarginal gyrus. Patient 3 also showed moderate pure agraphia for both kanji and kana after an infarction in the left supramarginal and postcentral gyri. All three patients made transposition errors (changing of sequential order of kana characters) in reading. Patient 1 showed letter-by-letter reading and a word-length effect and made substitution errors (changing hiragana [one form of kana] characters in a word to katakana [another form of kana] characters and vice versa) in writing. Alexia occurs as "angular" alexia only when the lesion involves the adjacent lateral occipital gyri. Transposition errors suggest disrupted sequential phonological processing from the angular and lateral occipital gyri to the supramarginal gyrus. Substitution errors suggest impaired allographic conversion between hiragana and katakana attributable to a dysfunction in the angular/lateral occipital gyri.
Strong approximations and sequential change-point analysis for diffusion processes
Mihalache, Stefan-Radu
2012-01-01
In this paper ergodic diffusion processes depending on a parameter in the drift are considered under the assumption that the processes can be observed continuously. Strong approximations by Wiener processes for a stochastic integral and for the estimator process constructed by the one...
Pettit, S C; Moody, M D; Wehbie, R S; Kaplan, A H; Nantermet, P V; Klein, C A; Swanstrom, R
1994-01-01
The proteolytic processing sites of the human immunodeficiency virus type 1 (HIV-1) Gag precursor are cleaved in a sequential manner by the viral protease. We investigated the factors that regulate sequential processing. When full-length Gag protein was digested with recombinant HIV-1 protease in vitro, four of the five major processing sites in Gag were cleaved at rates that differ by as much as 400-fold. Three of these four processing sites were cleaved independently of the others. The CA/p...
Karin Hellauer
2017-03-01
Managed aquifer recharge (MAR) systems are an efficient barrier against many contaminants. The biotransformation of trace organic chemicals (TOrCs) depends strongly on the redox conditions as well as on the availability of dissolved organic carbon; oxic and oligotrophic conditions, obtained by combining two filtration systems with an intermediate aeration step, favor enhanced TOrCs removal. In this study, four parallel laboratory-scale soil column experiments using different intermittent aeration techniques were run to further optimize TOrCs transformation during MAR: no aeration, aeration with air, pure oxygen, and ozone. Rapid oxygen consumption, nitrate reduction and dissolution of manganese confirmed anoxic conditions within the first filtration step, mimicking traditional bank filtration. Aeration with air led to suboxic conditions, whereas oxidation by pure oxygen and ozone led to fully oxic conditions throughout the second system. The sequential system resulted in an equal or better transformation of most TOrCs compared to the single-step bank filtration system. Despite the fast oxygen consumption, acesulfame, iopromide, iomeprol and valsartan were degraded within the first infiltration step. The compounds benzotriazole, diclofenac, 4-formylaminoantipyrine, gabapentin, metoprolol, valsartan acid and venlafaxine showed significantly enhanced removal in the systems with intermittent oxidation compared to the conventional treatment without aeration. Further improvement of benzotriazole and gabapentin removal when using pure oxygen confirmed a potential oxygen limitation in the second column after aeration with air. Ozonation resulted in enhanced removal of persistent compounds (i.e., carbamazepine, candesartan, olmesartan) and further increased the attenuation of gabapentin, methylbenzotriazole, benzotriazole, and venlafaxine. Diatrizoic acid showed little degradation in an ozone-MAR hybrid system.
Neural Generalized Predictive Control of a non-linear Process
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1998-01-01
The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller, based on a neural network model, which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves minimization of a cost function which, in the present case, has to be done numerically. We therefore develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.
For the Fenton process in a sequential downflow and upflow system to treat textile dyeing wastewater
Bae, W.K.; Ko, G.B.; Cho, S.J. [Dept. of Civil and Environmental Engineering, Hanyang Univ., Kyounggi (Korea); Lee, S.H. [Dept. of Environmental Engineering, Sangmyung Univ., Cheonan (Korea)
2003-07-01
Wastewater from the textile dyeing industry is characterized by high temperature, high pH, and a high pollution load of color and COD, including refractory, toxic and high-molecular-weight compounds; it is therefore presumed to be very resistant to microbial degradation. Combined processes, chemical oxidation followed by a biological process, are usually applied to textile dyeing wastewater in order to meet water quality standards. The Fenton process, an advanced oxidation process, is well known to be effective for the removal of color and recalcitrant organics. However, which reaction mechanism predominates during the Fenton process, coagulation, oxidation or sedimentation, has so far not been well explained. This research attempts to identify the predominant reaction by comparing results of ferric coagulation and oxidation for the Fenton process. (orig.)
Sawant, R.M.; Chaudhuri, N.K.; Ramakumar, K.L.
2002-01-01
Determinations of hexamethylene tetramine (HMTA) and urea in the process solutions are required to optimize their concentrations for obtaining high-quality ceramic oxide microspheres, to monitor the washing procedure, and for their subsequent recovery, recycling or waste disposal. Determination of urea in the feed solution by conventional procedures is difficult because the solution contains HMTA; it is even more difficult in the effluent, which contains hydrolytic products such as formaldehyde, methylol derivatives of urea, ammonium nitrate, and the ammonium hydroxide used for washing the gel microspheres. A derivative potentiometric method using a microprocessor-based autotitrator is described. Peaks in the first derivative of the titration plot correspond to constituents of different basicities. Urea was selectively hydrolyzed at room temperature by the catalytic action of the urease enzyme, leaving HMTA unaffected. For the analysis of the feed solution, the ammonium hydroxide and ammonium bicarbonate produced from urea, and the HMTA, were sequentially titrated to obtain three corresponding peaks. Two separate titrations were required for the analysis of the effluent solution, which also contained free ammonia: one aliquot was titrated directly without adding urease (for free ammonia and HMTA) and another aliquot was titrated after treatment with urease. The end points due to the ammonia used for washing and that from urea hydrolysis merged, so that three peaks again appeared. Using this sequential method, the relative standard deviations over eight determinations were 0.81% for urea and 1.38% for HMTA when the aliquots contained 50 to 75 mg of urea and 75 to 125 mg of HMTA. Feed and effluent solutions of the process stream were analyzed. (author)
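The peak-detection step of such a derivative titration can be sketched numerically. The curve below is entirely synthetic (two hypothetical equivalence points and an illustrative EMF scale, not the authors' data); equivalence points are located as local maxima of the first derivative of the titration plot:

```python
import numpy as np

# Hypothetical titration curve: two inflection points, mimicking two
# constituents of different basicity in a potentiometric titration.
v = np.linspace(0.0, 10.0, 501)                # titrant volume, mL
emf = 80.0 * np.tanh(3.0 * (v - 3.0)) + 80.0 * np.tanh(3.0 * (v - 7.0))

d_emf = np.gradient(emf, v)                    # first derivative dE/dV

# Equivalence points appear as prominent local maxima of the derivative.
peaks = [i for i in range(1, len(v) - 1)
         if d_emf[i] > d_emf[i - 1] and d_emf[i] >= d_emf[i + 1]
         and d_emf[i] > 0.5 * d_emf.max()]
endpoints = [v[i] for i in peaks]
```

An autotitrator applies the same logic to measured data, with smoothing added to suppress noise before differentiation.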
Button, Le; Peter, Beate; Stoel-Gammon, Carol; Raskind, Wendy H
2013-03-01
The purpose of this study was to address the hypothesis that childhood apraxia of speech (CAS) is influenced by an underlying deficit in sequential processing that is also expressed in other modalities. In a sample of 21 adults from five multigenerational families, 11 with histories of various familial speech sound disorders, 3 biologically related adults from a family with familial CAS showed motor sequencing deficits in an alternating motor speech task. Compared with the other adults, these three participants showed deficits in tasks requiring high loads of sequential processing, including nonword imitation, nonword reading and spelling. Qualitative error analyses in real word and nonword imitations revealed group differences in phoneme sequencing errors. Motor sequencing ability was correlated with phoneme sequencing errors during real word and nonword imitation, reading and spelling. Correlations were characterized by extremely high scores in one family and extremely low scores in another. Results are consistent with a central deficit in sequential processing in CAS of familial origin.
Giordani, Bruno; And Others
1996-01-01
Evaluation of the Kaufman Assessment Battery for Children (K-ABC) with 130 primary school children in Zaire revealed three findings: (1) the distinction between sequential processing and simultaneous processing was valid; (2) the K-ABC discriminated effectively among grade levels, health and family environment variables, and tribal membership; and…
Gallimore, Casey E; Porter, Andrea L; Barnett, Susanne G
2016-10-25
Objective. To develop and apply a stepwise process to assess achievement of course learning objectives related to advanced pharmacy practice experience (APPE) preparedness and to inform the redesign of sequential skills-based courses. Design. Four steps comprised the assessment and redesign process: (1) identify skills critical for APPE preparedness; (2) use focus groups and course evaluations to determine student competence in skill performance; (3) apply course mapping to identify course deficits contributing to suboptimal skill performance; and (4) initiate course redesign to target the exposed deficits. Assessment. Focus group participants perceived students were least prepared for skills within the Accreditation Council for Pharmacy Education's pre-APPE core domains of Identification and Assessment of Drug-related Problems and General Communication Abilities. Course mapping identified gaps in instruction, performance, and assessment of skills within the aforementioned domains. Conclusions. A stepwise process that identified strengths and weaknesses of a course was used to facilitate structured course redesign. Strengths of the process included input and corroboration from both preceptors and students. Limitations included feedback from a small number of pharmacy preceptors and an increased workload on course coordinators.
A linear time layout algorithm for business process models
Gschwind, T.; Pinggera, J.; Zugal, S.; Reijers, H.A.; Weber, B.
2014-01-01
The layout of a business process model influences how easily it can be understood. Existing layout features in process modeling tools often rely on graph representations, but do not take the specific properties of business process models into account. In this paper, we propose an algorithm that is
Arellano-González, Miguel Ángel; González, Ignacio [Universidad Autónoma Metropolitana-Iztapalapa, Departamento de Química, Av. San Rafael Atlixco No. 186, Col. Vicentina, 09340 Mexico D.F. (Mexico); Texier, Anne-Claire, E-mail: actx@xanum.uam.mx [Universidad Autónoma Metropolitana-Iztapalapa, Departamento de Biotecnología, Av. San Rafael Atlixco No. 186, Col. Vicentina, 09340 Mexico, D.F. (Mexico)
2016-08-15
Highlights: • Dechlorination of 2-chlorophenol to phenol was 100% efficient on a Pd-Ni/Ti electrode. • An ECCOCEL reactor was efficient and selective in obtaining phenol from 2-chlorophenol. • Phenol was totally mineralized in a coupled denitrifying bioreactor. • The overall 2-chlorophenol mineralization time in the combined system was 7.5 h. - Abstract: In this work, a novel approach was applied to obtain the mineralization of 2-chlorophenol (2-CP) in a combined electrochemical-biological system, in which an electrocatalytic dehydrogenation process (reductive dechlorination) was coupled to a biological denitrification process. Reductive dechlorination of 2-CP was conducted in an ECCOCEL-type reactor on a Pd-Ni/Ti electrode at a potential of −0.40 V vs Ag/AgCl(s)/KCl(sat), achieving 100% transformation of 2-CP into phenol. The electrochemically pretreated effluent was fed to a rotating-cylinder denitrifying bioreactor, where all of the phenol was mineralized by denitrification, yielding CO₂ and N₂ as the end products. The total time required for 2-CP mineralization in the combined electrochemical-biological process was 7.5 h. This value is close to those previously reported for electrochemical and advanced oxidation processes, but here an efficient process was obtained without accumulation of by-products or excessive energy costs, thanks to the selective electrochemical pretreatment. This study showed that electrochemical reductive pretreatment combined with biological processes could be a promising technology for removing recalcitrant molecules, such as chlorophenols, from wastewaters in more efficient, rapid, and environmentally friendly processes.
Farris, Samantha G; Zvolensky, Michael J; Blalock, Janice A; Schmidt, Norman B
2014-05-01
Empirical work has documented a robust and consistent relation between panic attacks and smoking behavior. Theoretical models posit smokers with panic attacks may rely on smoking to help them manage chronically elevated negative affect due to uncomfortable bodily states, which may explain higher levels of nicotine dependence and quit problems. The current study examined the effects of panic attack history on nicotine dependence, perceived barriers for quitting, smoking inflexibility when emotionally distressed, and expired carbon monoxide among 461 treatment-seeking smokers. A multiple mediator path model was evaluated to examine the indirect effects of negative affect and negative affect reduction motives as mediators of the panic attack-smoking relations. Panic attack history was indirectly related to greater levels of nicotine dependence (b = 0.039, CI95% = 0.008, 0.097), perceived barriers to smoking cessation (b = 0.195, CI95% = 0.043, 0.479), smoking inflexibility/avoidance when emotionally distressed (b = 0.188, CI95% = 0.041, 0.445), and higher levels of expired carbon monoxide (b = 0.071, CI95% = 0.010, 0.230) through the sequential effects of negative affect and negative affect smoking motives. The present results provide empirical support for the sequential mediating role of negative affect and smoking motives for negative affect reduction in the relation between panic attacks and a variety of smoking variables in treatment-seeking smokers. These mediating variables are likely important processes to address in smoking cessation treatment, especially in panic-vulnerable smokers.
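The product-of-coefficients logic behind such indirect effects can be sketched on synthetic data. Everything below is invented for illustration (a single mediator rather than the authors' sequential multiple-mediator path model), with a percentile bootstrap confidence interval, one common way such indirect effects are tested:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.binomial(1, 0.5, n).astype(float)    # e.g. panic history (0/1)
m = 0.5 * x + rng.normal(0, 1, n)            # mediator (e.g. negative affect)
y = 0.4 * m + 0.1 * x + rng.normal(0, 1, n)  # outcome (e.g. dependence score)

def indirect(x, m, y):
    # a-path: x -> m;  b-path: m -> y controlling for x (both via OLS).
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]),
                        m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]),
                        y, rcond=None)[0][2]
    return a * b                              # indirect effect = a * b

# Percentile bootstrap CI for the indirect effect.
boot = np.array([indirect(x[i], m[i], y[i])
                 for i in (rng.integers(0, n, n) for _ in range(1000))])
lo, hi = np.percentile(boot, [2.5, 97.5])
```

A CI that excludes zero, as reported in the abstract's intervals, is the usual evidence for mediation under this approach.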
Mariusz Dudziak
2014-10-01
The paper discusses the results of a study of the impact of the membrane on the performance of an integrated photocatalysis/nanofiltration system applied to remove mycoestrogens from water. The results were compared with those obtained in single-step photocatalysis and nanofiltration processes. The subjects of the study were simulated waters containing different concentrations of humic acids, to which mycoestrogens were added at a concentration of 500 μg/dm³. It was shown that the integrated system improved the efficiency of mycoestrogen removal compared with the single-step photocatalysis process. In the case of nanofiltration, the treatment efficiency was comparable in the integrated and the single nanofiltration processes regardless of the membrane type applied. However, the investigated membranes were found to differ in their susceptibility to fouling and in their removal of inorganic compounds, which should be considered when developing a water treatment technology.
Lin, Yi-Chun; Hsieh, Ya-Hui; Hou, Huei-Tse
2015-01-01
The development of a usability evaluation method for educational systems or applications, called the self-report-based sequential analysis, is described herein. The method aims to extend the current practice by proposing self-report-based sequential analysis as a new usability method, which integrates the advantages of self-report in survey…
76 FR 56357 - Expedited Vocational Assessment Under the Sequential Evaluation Process
2011-09-13
... work. This proposed new process would not disadvantage any claimant or change the ultimate conclusion... demands of unskilled work.\28\ If any of these rules would indicate that the claimant may be disabled or... to them, and an explanation of how we will apply the new rules.
Sequential Specification of Time-aware Stream Processing Applications (Extended Abstract)
Geuns, S.J.; Hausmans, J.P.H.M.; Bekooij, Marco Jan Gerrit
2012-01-01
Automatic parallelization of Nested Loop Programs (NLPs) is an attractive method to create embedded real-time stream processing applications for multi-core systems. However, the description and parallelization of applications with a time dependent functional behavior has not been considered in NLPs.
José L. Álvarez Cruz
2017-11-01
This study evaluated the efficiency of Fenton (Fe/H2O2) and photo-assisted Fenton (Fe2+/H2O2/UV) reactions combined with coagulation-flocculation (C-F) processes in removing the chemical oxygen demand (COD) from a Mexican landfill leachate at laboratory scale. The C-F experiments were carried out in jar-test equipment using different FeSO4 concentrations (0.0, 0.6, 1.0, 3, and 6 mM) at pH = 3.0. The effluent from the C-F processes was then treated using the Fenton reaction. The experiments were carried out in a 500 mL glass reactor filled with 250 mL of landfill leachate. Different molar ratios (Fe/H2O2) were tested (e.g., 1.6, 3.3, 30, 40 and 75), and the reaction was followed until COD analysis showed no significant further variation in concentration or until 90 min of reaction time had elapsed. The photo-assisted Fenton reaction was carried out using a UV lamp (365 nm, 5 mW) with the same Fe/H2O2 molar ratios described above. The results suggest that the photo-assisted Fenton process is the most efficient oxidation method for removing organic matter and color from the leachate: it removed 68% of the COD and 90% of the color at pH = 3 within 30 minutes of reaction time at a H2O2/Fe molar ratio of 75, only a third of the reaction time of the conventional Fenton process.
Newton, Paul; Chandler, Val; Morris-Thomson, Trish; Sayer, Jane; Burke, Linda
2015-01-01
To map current selection and recruitment processes for newly qualified nurses and to explore the advantages and limitations of current selection and recruitment processes. The need to improve current selection and recruitment practices for newly qualified nurses is highlighted in health policy internationally. A cross-sectional, sequential-explanatory mixed-method design with four components: (1) a literature review of selection and recruitment of newly qualified nurses; (2) a literature review of a public-sector profession's selection and recruitment processes; (3) a survey mapping existing selection and recruitment processes for newly qualified nurses; and (4) a qualitative study of recruiters' selection and recruitment processes. Literature searches on the selection and recruitment of newly qualified candidates in teaching and nursing (2005-2013) were conducted. Cross-sectional, mixed-method data were collected using a survey instrument from thirty-one (n = 31) individuals in London health providers who had responsibility for the selection and recruitment of newly qualified nurses. Of the providers who took part, six (n = 6) were purposively selected for qualitative interviews. Issues of supply and demand in the workforce, rather than selection and recruitment tools, predominated in the literature reviews. Examples of tools to measure values, attitudes and skills were found in the nursing literature. The mapping exercise found that providers used many selection and recruitment tools; some providers combined tools to streamline the process and assure the quality of candidates. Most providers had processes which addressed the issue of quality in the selection and recruitment of newly qualified nurses. The 'assessment centre model', which providers were adopting, allowed for multiple levels of assessment and streamlined recruitment. There is a need to validate the efficacy of the selection tools. © 2014 John Wiley & Sons Ltd.
CHIRP-Like Signals: Estimation, Detection and Processing A Sequential Model-Based Approach
Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-08-04
Chirp signals have evolved primarily from radar/sonar signal processing applications, specifically those attempting to estimate the location of a target in a surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose phase changes instantaneously at each time sample, has an interesting property in that its autocorrelation approximates an impulse function. It is well known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replica of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver, yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system; that is, we transmit a "chirp-like pulse" into the target medium and attempt first to detect its presence and second to estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations, extraneous sources of interference in our frequency bands, and of course the ever-present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties, and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
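The matched-filter idea described above can be sketched in a few lines of numpy. All parameters here (sample rate, sweep band, delay) are illustrative, not the report's actual processing chain: a linear chirp replica is cross-correlated with a noisy received signal, and the echo delay is read off the correlation peak.

```python
import numpy as np

fs = 1000.0                       # sample rate (Hz), illustrative
t = np.arange(0, 0.5, 1 / fs)     # 0.5 s chirp replica
f0, f1 = 50.0, 200.0              # linear frequency sweep from f0 to f1
k = (f1 - f0) / t[-1]             # sweep rate
replica = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# Received signal: the chirp delayed by 300 samples, buried in noise.
rng = np.random.default_rng(1)
delay = 300
rx = rng.normal(0, 1.0, 2000)
rx[delay:delay + replica.size] += replica

# Matched filter: cross-correlate the measurement with the replica;
# the peak lag estimates the echo time (here, the delay in samples).
corr = np.correlate(rx, replica, mode="valid")
est_delay = int(np.argmax(corr))
```

Because the chirp's autocorrelation is nearly impulsive, the correlation peak localizes the echo sharply even when the pulse is invisible in the raw trace.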
Modeling of Activated Sludge Process Using Sequential Adaptive Neuro-fuzzy Inference System
Mahsa Vajedi
2014-10-01
In this study, an adaptive neuro-fuzzy inference system (ANFIS) was applied to model the activated sludge wastewater treatment process of the Mobin petrochemical company. The correlation coefficients between the input variables and the output variable (the quality of the outlet flow) were calculated to determine the inputs with the highest influence on the output, in order to compare three neuro-fuzzy structures with different numbers of parameters. The predictions of the neuro-fuzzy models were compared with those of multilayer artificial neural network models of similar structure. The comparison indicated that both methods yield flexible, robust and effective models of the activated sludge system. Moreover, the root mean square errors for the neuro-fuzzy and neural network models were 5.14 and 6.59, respectively, which means the former is the superior method.
Zikmund, T; Kvasnica, L; Týč, M; Křížová, A; Colláková, J; Chmelík, R
2014-11-01
Transmitted light holographic microscopy is particularly used for quantitative phase imaging of transparent microscopic objects such as living cells. The study of the cell is based on extraction of dynamic data on cell behaviour from the time-lapse sequence of phase images. However, the phase images are affected by phase aberrations that make the analysis particularly difficult, because the phase deformation is prone to change during long-term experiments. Here, we present a novel algorithm for sequential processing of phase images of living cells in a time-lapse sequence. The algorithm compensates for the deformation of a phase image using weighted least-squares surface fitting. Moreover, it identifies and segments the individual cells in the phase image. All these procedures are performed automatically and applied immediately after each phase image is obtained. This property of the algorithm is important for real-time quantitative phase imaging of cells and for instantaneous control of the course of the experiment by playback of the recorded sequence up to the actual time. Such operator intervention is a forerunner of process automation derived from image analysis. The efficiency of the proposed algorithm is demonstrated on images of rat fibrosarcoma cells using an off-axis holographic microscope. © 2014 The Authors Journal of Microscopy © 2014 Royal Microscopical Society.
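The aberration-compensation step can be sketched as a weighted least-squares fit of a low-order surface to the phase background. The sketch below is a simplified stand-in for the authors' method: the data are synthetic, and the weighting rule (down-weighting pixels well above the median, assumed to be cell rather than background) is an invented heuristic.

```python
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx].astype(float)

# Synthetic phase image: a smooth aberration surface plus a "cell" blob.
aberr = 0.02 * xx + 0.01 * yy + 1e-4 * xx * yy
cell = np.zeros((ny, nx))
cell[20:40, 20:40] = 2.0
phase = aberr + cell + rng.normal(0, 0.01, (ny, nx))

# Weighted least-squares fit of a bilinear surface to the background;
# zero-weight pixels far above the median (likely cell, not background).
w = (phase < np.median(phase) + 0.5).astype(float).ravel()
A = np.column_stack([np.ones(phase.size), xx.ravel(), yy.ravel(),
                     (xx * yy).ravel()])
coef, *_ = np.linalg.lstsq(w[:, None] * A, w * phase.ravel(), rcond=None)
background = (A @ coef).reshape(ny, nx)
compensated = phase - background   # flat background, cell phase preserved
```

After subtraction the background sits near zero while the cell's phase signal survives, which is what makes subsequent segmentation and time-lapse comparison meaningful.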
A Sequential Multiplicative Extended Kalman Filter for Attitude Estimation Using Vector Observations
Qin, Fangjun; Jiang, Sai; Zha, Feng
2018-01-01
In this paper, a sequential multiplicative extended Kalman filter (SMEKF) is proposed for attitude estimation using vector observations. In the proposed SMEKF, each of the vector observations is processed sequentially to update the attitude, which can make the measurement model linearization more accurate for the next vector observation. This is the main difference to Murrell’s variation of the MEKF, which does not update the attitude estimate during the sequential procedure. Meanwhile, the covariance is updated after all the vector observations have been processed, which is used to account for the special characteristics of the reset operation necessary for the attitude update. This is the main difference to the traditional sequential EKF, which updates the state covariance at each step of the sequential procedure. The numerical simulation study demonstrates that the proposed SMEKF has more consistent and accurate performance in a wide range of initial estimate errors compared to the MEKF and its traditional sequential forms. PMID:29751538
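The sequential update at the heart of such filters can be illustrated in the linear case: with a diagonal measurement noise covariance, processing the components of a vector observation one scalar at a time yields exactly the same posterior as the batch update. The sketch below is a generic numpy Kalman update, not the authors' attitude-specific MEKF:

```python
import numpy as np

def kf_update(x, P, H, R, z):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Illustrative two-state prior and a two-component vector observation.
x0 = np.array([1.0, -0.5])
P0 = np.array([[2.0, 0.3], [0.3, 1.0]])
H = np.array([[1.0, 0.0], [1.0, 1.0]])
R = np.diag([0.5, 0.2])          # diagonal: components are independent
z = np.array([0.9, 0.4])

# Batch: both components at once.
xb, Pb = kf_update(x0, P0, H, R, z)

# Sequential: one scalar component at a time.
xs, Ps = x0, P0
for i in range(2):
    xs, Ps = kf_update(xs, Ps, H[i:i+1], R[i:i+1, i:i+1], z[i:i+1])
```

In the extended (nonlinear) setting, the benefit noted in the abstract comes from re-evaluating the measurement Jacobian at the freshly updated state before processing the next observation, which a batch update cannot do.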
Zhao, Zilong; Dong, Wenyi; Wang, Hongjie; Chen, Guanhan; Wang, Wei; Liu, Zekun; Gao, Yaguang; Zhou, Beili
2017-08-01
Elimination of hypophosphite (HP) was studied as an example of nickel plating effluent treatment by an O₃/H₂O₂ and sequential Fe(II) catalytic oxidation process. Performance assessments with artificial HP solution, varying the initial pH and employing various oxidation processes, clearly showed that the O₃/H₂O₂-Fe(II) two-step oxidation process had the highest removal efficiency under the same operating conditions. The effects of O₃ dosing, H₂O₂ concentration, Fe(II) addition and Fe(II) feeding time on the removal efficiency of HP were further evaluated in terms of the apparent kinetic rate constant. Under improved conditions (initial HP concentration of 50 mg L⁻¹, 75 mg L⁻¹ O₃, 1 mL L⁻¹ H₂O₂, 150 mg L⁻¹ Fe(II) and pH 7.0), the discharge standard (<0.5 mg L⁻¹ in China) could be met, and the Fe(II) feeding time was found to be the limiting factor for the evolution of the apparent kinetic rate constant in the second stage. Characterization studies showed that a neutralization step after the oxidation treatment favored phosphorus removal owing to the formation of more metal hydroxides. Moreover, compared with a lab-scale Fenton approach, the O₃/H₂O₂-Fe(II) oxidation process had competitive advantages with respect to applicable pH range, removal efficiency, sludge production and economic costs. Copyright © 2017 Elsevier Ltd. All rights reserved.
R. Barbiero
2007-05-01
Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511-L60 and LAMI-3). Temperature is, of course, a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold-air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest (RF). We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean-bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small, if not negligible. RF, the best-performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
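The multivariate linear MOS step mentioned above can be sketched with synthetic data (the grid geometry, coefficients and error magnitudes below are invented for illustration): regress observed station temperature on NWP temperatures at the surrounding grid points, then compare against the raw nearest-grid-point forecast.

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_grid = 400, 9          # forecasts at 9 grid points around the site

# Synthetic NWP output and "observed" minimum temperature with a bias
# the raw model cannot capture (e.g. local cold-air drainage).
nwp = rng.normal(5.0, 6.0, (n_days, n_grid))
obs = 0.8 * nwp[:, 4] + 0.1 * nwp[:, 0] - 3.0 + rng.normal(0, 1.0, n_days)

train, test = slice(0, 300), slice(300, None)

# Multivariate linear MOS: obs ~ intercept + weighted grid-point temps.
A = np.column_stack([np.ones(300), nwp[train]])
coef, *_ = np.linalg.lstsq(A, obs[train], rcond=None)
pred = np.column_stack([np.ones(100), nwp[test]]) @ coef

mae_raw = np.mean(np.abs(nwp[test, 4] - obs[test]))   # nearest grid point
mae_mos = np.mean(np.abs(pred - obs[test]))           # MOS-corrected
```

The regression absorbs both the systematic offset and the misweighting of neighbouring grid points, which is why even linear MOS beats the raw forecast; the non-linear methods in the study refine this same mapping.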
Chen, Ching-Lung; Huang, Chien-Chang; Ho, Kao-Chia; Hsiao, Ping-Xuan; Wu, Meng-Shan; Chang, Jo-Shu
2015-10-01
Although producing biodiesel from microalgae seems promising, there is still a lack of technology for the quick and cost-effective conversion of wet microalgae to biodiesel. This study aimed to develop a novel microalgal biodiesel production method, consisting of an open system of microwave disruption, partial dewatering (via a combination of methanol treatment and low-speed centrifugation), oil extraction, and transesterification without pre-removal of the co-solvent, using Chlamydomonas sp. JSC4 with 68.7 wt% water content as the feedstock. Direct transesterification of the disrupted wet microalgae was also conducted. The biomass content of the wet microalgae increased to 56.6 and 60.5 wt%, respectively, after microwave disruption and partial dewatering. About 96.2% oil recovery was achieved under the following conditions: extraction temperature, 45°C; hexane/methanol ratio, 3:1; extraction time, 80 min. Transesterification of the extracted oil reached 97.2% conversion within 15 min at 45°C and a 6:1 solvent/methanol ratio, with simultaneous chlorophyll removal during the process. Nearly 100% biodiesel conversion was also obtained when conducting direct transesterification of the disrupted oil-bearing microalgal biomass. Copyright © 2015 Elsevier Ltd. All rights reserved.
On-line validation of linear process models using generalized likelihood ratios
Tylee, J.L.
1981-12-01
A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator
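The idea can be illustrated in its simplest form: a generalized likelihood ratio for a constant shift in the residuals of the nominal linear model. This sketch is generic (the paper's statistic for the seventh-order steam generator model is more elaborate), and the numbers are illustrative:

```python
import numpy as np

def glr_mean_shift(residuals, sigma):
    """Log generalized likelihood ratio for a constant shift in zero-mean
    Gaussian residuals: max_b log L(b) - log L(0). With bhat the sample
    mean, this equals n * bhat^2 / (2 * sigma^2)."""
    r = np.asarray(residuals, dtype=float)
    bhat = r.mean()
    return len(r) * bhat**2 / (2 * sigma**2)

rng = np.random.default_rng(1)
sigma = 0.1
ok = rng.normal(0.0, sigma, 100)        # process near the linearization point
drifted = rng.normal(0.3, sigma, 100)   # process has moved away

print(glr_mean_shift(ok, sigma), glr_mean_shift(drifted, sigma))
# A threshold between the two values would trigger a new linearization.
```

When the statistic crosses a chosen threshold, the process is judged to have moved far enough from the operating point to justify generating a new linear model.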
Diagnostic checking in linear processes with infinite variance
Krämer, Walter; Runde, Ralf
1998-01-01
We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.
Button, Le; Peter, Beate; Stoel-Gammon, Carol; Raskind, Wendy H.
2013-01-01
The purpose of this study was to address the hypothesis that childhood apraxia of speech (CAS) is influenced by an underlying deficit in sequential processing that is also expressed in other modalities. In a sample of 21 adults from five multigenerational families, 11 with histories of various familial speech sound disorders, 3 biologically…
Supply Chain Management: from Linear Interactions to Networked Processes
Doina FOTACHE
2006-01-01
Supply Chain Management is a distinctive product, with a tremendous impact on the software applications market. SCM applications are back-end solutions intended to link suppliers, manufacturers, distributors and resellers in a production and distribution network, which allows the enterprise to track and consolidate the flows of materials and data through the process of manufacturing and distribution of goods/services. The advent of the Web as a major means of conducting business transactions and business-to-business communications, coupled with evolving web-based supply chain management (SCM) technology, has resulted in a transition period from "linear" supply chain models to "networked" supply chain models. The technologies to enable dynamic process changes and real-time interactions between extended supply chain partners are emerging and being deployed at an accelerated pace.
Soares, António Carlos Alves; Pinho, Maria Teresa; Albergaria, José Tomás
2012-01-01
Soil vapor extraction (SVE) is an efficient, well-known and widely applied soil remediation technology. However, under certain conditions it cannot achieve the defined cleanup goals, requiring further treatment, for example, through bioremediation (BR). The sequential application of these technologies…
Florencio, Camila; Cunha, Fernanda M; Badino, Alberto C; Farinas, Cristiane S; Ximenes, Eduardo; Ladisch, Michael R
2016-08-01
Cellulases and hemicellulases from Trichoderma reesei and Aspergillus niger have been shown to be powerful enzymes for biomass conversion to sugars, but the production costs are still relatively high for commercial application. The choice of an effective microbial cultivation process employed for enzyme production is important, since it may affect titers and the profile of protein secretion. We used proteomic analysis to characterize the secretome of T. reesei and A. niger cultivated in submerged and sequential fermentation processes. The information gained was key to understanding differences in hydrolysis of steam-exploded sugarcane bagasse for enzyme cocktails obtained from the two different cultivation processes. The sequential process for cultivating A. niger gave xylanase and β-glucosidase activities 3- and 8-fold higher, respectively, than the corresponding activities from the submerged process. A greater diversity of critical cellulolytic and hemicellulolytic enzymes was also observed through secretome analyses. These results helped to explain the 3-fold higher yield for hydrolysis of non-washed pretreated bagasse when combined T. reesei and A. niger enzyme extracts from sequential fermentation were used in place of enzymes obtained from submerged fermentation. An enzyme loading of 0.7 FPU cellulase activity/g glucan was surprisingly effective when compared to the 5-15 times higher enzyme loadings commonly reported for other cellulose hydrolysis studies. Analyses showed that more than 80% consisted of proteins other than cellulases, whose role is important to the hydrolysis of a lignocellulose substrate. Our work combined proteomic analyses and enzymology studies to show that sequential and submerged cultivation methods differently influence both titers and secretion profile of key enzymes required for the hydrolysis of sugarcane bagasse. The higher diversity of feruloyl esterases, xylanases and other auxiliary hemicellulolytic enzymes observed in the enzyme
A quantum analogy for the linear thermodynamics of irreversible processes
Ibanez-Mengual, J.A.; Tejerina-Garcia, A.F.
1981-01-01
In this paper, a model for the transport through a liquid junction of two solutions of the same components, based on quantum-mechanical considerations, is established. A small energy difference, compared with the molecules' energy, among the molecules placed at both sides of the junction is assumed to exist. The liquid junction is assimilated to a potential barrier, getting the material flow from the transmission coefficient of the barrier, when the energy difference is caused by a temperature gradient, a concentration gradient, or both gradients acting together. In all cases, equations formally identical to those of the thermodynamics of irreversible processes are obtained. In the last case, the heat flow is also determined. (author)
Bizer, David S; DeMarzo, Peter M
1992-01-01
The authors study environments in which agents may borrow sequentially from more than one lender. Although debt is prioritized, additional lending imposes an externality on prior debt because, with moral hazard, the probability of repayment of prior loans decreases. Equilibrium interest rates are higher than they would be if borrowers could commit to borrow from at most one bank. Even though the loan terms are less favorable than they would be under commitment, the indebtedness of borrowers i...
Linear Processing Design of Amplify-and-Forward Relays for Maximizing the System Throughput
Qiang Wang
2018-01-01
In this paper, firstly, we study the linear processing of amplify-and-forward (AF) relays for the multiple-relays, multiple-users scenario. We regard all relays as one special "relay", and then the subcarrier pairing, relay selection and channel assignment can be seen as a linear processing of the special "relay". Under fixed power allocation, the linear processing of AF relays can be regarded as a permutation matrix. Employing the partitioned matrix, we propose an optimal linear processing design for AF relays to find the optimal permutation matrix based on the sorting of the received SNR over the subcarriers from BS to relays and from relays to users, respectively. Then, we prove the optimality of the proposed linear processing scheme. Through the proposed linear processing scheme, we can obtain the optimal subcarrier pairing, relay selection and channel assignment under given power allocation in polynomial time. Finally, we propose an iterative algorithm based on the proposed linear processing scheme and the Lagrange dual domain method to jointly optimize the subcarrier pairing, relay selection, channel assignment and power allocation. Simulation results illustrate that the proposed algorithm achieves excellent performance.
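A sketch of the sorting-based pairing idea: the permutation matrix that pairs first-hop and second-hop subcarriers can be built by matching the k-th strongest SNR on each hop. This is a simplified reading of the scheme, with illustrative SNR values:

```python
import numpy as np

def sort_based_pairing(snr_hop1, snr_hop2):
    """Pair subcarriers by sorted SNR: the k-th strongest BS->relay
    subcarrier is matched to the k-th strongest relay->user subcarrier.
    Returns a permutation matrix P with P[i, j] = 1 when first-hop
    subcarrier i is paired with second-hop subcarrier j."""
    n = len(snr_hop1)
    order1 = np.argsort(snr_hop1)[::-1]   # hop-1 subcarriers, best first
    order2 = np.argsort(snr_hop2)[::-1]   # hop-2 subcarriers, best first
    P = np.zeros((n, n), dtype=int)
    P[order1, order2] = 1
    return P

snr1 = np.array([3.0, 9.0, 1.0, 5.0])
snr2 = np.array([2.0, 4.0, 8.0, 1.0])
P = sort_based_pairing(snr1, snr2)
print(P)
```

Matching strong subcarriers with strong ones is the standard heuristic behind sorted-SNR pairing for AF relaying, since the end-to-end SNR of a pair is limited by its weaker hop.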
2013-01-01
This book consists of twenty-seven chapters, which can be divided into three large categories: articles focused on the mathematical treatment of non-linear problems, including the methodologies, algorithms and properties of analytical and numerical solutions to particular non-linear problems; theoretical and computational studies dedicated to the physics and chemistry of non-linear micro- and nano-scale systems, including molecular clusters, nano-particles and nano-composites; and papers focused on non-linear processes in medico-biological systems, including mathematical models of ferments, amino acids, blood fluids and polynucleic chains.
Sequential stochastic optimization
Cairoli, Renzo
1996-01-01
Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-parameter…
Azimuthal asymmetry in processes of nonlinear QED for linearly polarized photon
Bajer, V.N.; Mil'shtejn, A.I.
1994-01-01
Cross sections of nonlinear QED processes (photon-photon scattering, photon splitting in a Coulomb field, and Delbrueck scattering) are considered for a linearly polarized initial photon. The cross sections have sizeable azimuthal asymmetry. 15 refs.; 3 figs.
Brodhecker, Shirley G.
This practicum report addresses the need to supply Head Start teachers with: (1) specific preschool music objectives; (2) a sequential preschool developmental program in music to match the child's cognitive level; (3) how to choose instructional material to encourage specific basic school readiness skills; and (4) workshops to accomplish these…
Multi-agent sequential hypothesis testing
Kim, Kwang-Ki K.; Shamma, Jeff S.
2014-01-01
…incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well…
Pettit, S C; Moody, M D; Wehbie, R S; Kaplan, A H; Nantermet, P V; Klein, C A; Swanstrom, R
1994-12-01
The proteolytic processing sites of the human immunodeficiency virus type 1 (HIV-1) Gag precursor are cleaved in a sequential manner by the viral protease. We investigated the factors that regulate sequential processing. When full-length Gag protein was digested with recombinant HIV-1 protease in vitro, four of the five major processing sites in Gag were cleaved at rates that differ by as much as 400-fold. Three of these four processing sites were cleaved independently of the others. The CA/p2 site, however, was cleaved approximately 20-fold faster when the adjacent downstream p2/NC site was blocked from cleavage or when the p2 domain of Gag was deleted. These results suggest that the presence of a C-terminal p2 tail on processing intermediates slows cleavage at the upstream CA/p2 site. We also found that lower pH selectively accelerated cleavage of the CA/p2 processing site in the full-length precursor and as a peptide primarily by a sequence-based mechanism rather than by a change in protein conformation. Deletion of the p2 domain of Gag results in released virions that are less infectious despite the presence of the processed final products of Gag. These findings suggest that the p2 domain of HIV-1 Gag regulates the rate of cleavage at the CA/p2 processing site during sequential processing in vitro and in infected cells and that p2 may function in the proper assembly of virions.
Modelling point patterns with linear structures
Møller, Jesper; Rasmussen, Jakob Gulddahl
2009-01-01
…processes whose realizations contain such linear structures. Such a point process is constructed sequentially by placing one point at a time. The points are placed in such a way that new points are often placed close to previously placed points, and the points form roughly line-shaped structures. We consider simulations of this model and compare with real data.
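The sequential construction can be mimicked with a toy simulator: each new point either extends a line from the previous point or restarts at a uniform location. The parameters (p_extend, step, noise) are illustrative choices, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

def sequential_linear_points(n, p_extend=0.8, step=0.03, noise=0.005):
    """Toy sequential point construction on the unit square: with
    probability p_extend the new point continues a line from the previous
    point (same direction plus a small perturbation); otherwise it starts
    a fresh structure at a uniform location."""
    pts = [rng.uniform(0, 1, 2)]
    direction = rng.uniform(0, 2 * np.pi)
    for _ in range(n - 1):
        if rng.uniform() < p_extend:
            direction += rng.normal(0, 0.2)
            new = pts[-1] + step * np.array([np.cos(direction), np.sin(direction)])
            new += rng.normal(0, noise, 2)
        else:
            new = rng.uniform(0, 1, 2)
            direction = rng.uniform(0, 2 * np.pi)
        pts.append(np.clip(new, 0, 1))
    return np.array(pts)

pts = sequential_linear_points(200)
print(pts.shape)
```

Plotting `pts` shows clusters of roughly line-shaped chains interrupted by occasional uniform restarts, the qualitative behaviour the abstract describes.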
Granita; Bahar, A.
2015-01-01
This paper discusses the linear birth and death with immigration and emigration (BIDE) process and its stochastic differential equation (SDE) model. The forward Kolmogorov equation in a continuous-time Markov chain (CTMC) with a central-difference approximation was used to find the Fokker-Planck equation corresponding to a diffusion process having the stochastic differential equation of the BIDE process. The exact solution and the mean and variance functions of the BIDE process were found.
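As a sketch of the mean function: under the simplifying convention of per-capita birth/death rates λ, μ and constant immigration/emigration rates ν, η (which may differ from the paper's parameterization), the mean satisfies m'(t) = (λ − μ)m(t) + (ν − η). The closed form is checked below against a forward-Euler integration, in the spirit of the difference approximation mentioned in the abstract:

```python
import numpy as np

def mean_bide(t, m0, lam, mu, nu, eta):
    """Closed-form mean of a linear BIDE process, assuming per-capita
    birth/death rates lam, mu and constant immigration/emigration rates
    nu, eta (an illustrative convention): m'(t) = (lam - mu) m + (nu - eta)."""
    a, b = lam - mu, nu - eta
    if a == 0:
        return m0 + b * t
    return m0 * np.exp(a * t) + (b / a) * (np.exp(a * t) - 1)

def mean_bide_euler(t, m0, lam, mu, nu, eta, steps=100000):
    """Forward-Euler integration of the same mean ODE."""
    a, b = lam - mu, nu - eta
    dt = t / steps
    m = m0
    for _ in range(steps):
        m += dt * (a * m + b)
    return m

exact = mean_bide(2.0, 10, 0.5, 0.3, 1.0, 0.2)
approx = mean_bide_euler(2.0, 10, 0.5, 0.3, 1.0, 0.2)
print(exact, approx)
```

With these illustrative rates the net growth rate is a = 0.2 and the net inflow is b = 0.8, and the two computations agree closely.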
Fast Algorithms for High-Order Sparse Linear Prediction with Applications to Speech Processing
Jensen, Tobias Lindstrøm; Giacobello, Daniele; van Waterschoot, Toon
2016-01-01
In speech processing applications, imposing sparsity constraints on high-order linear prediction coefficients and prediction residuals has proven successful in overcoming some of the limitations of conventional linear predictive modeling. However, this modeling scheme, named sparse linear prediction… problem with lower accuracy than in previous work. In the experimental analysis, we clearly show that a solution with lower accuracy can achieve approximately the same performance as a high-accuracy solution both objectively, in terms of prediction gain, as well as with perceptually relevant measures, when evaluated in a speech reconstruction application.
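The underlying optimization problem, min over a of ||b − Xa||² + γ||a||₁ with the columns of X holding lagged samples, can be approximated with plain ISTA. This generic first-order sketch is not the authors' fast algorithm, but it illustrates why a lower-accuracy solution (fewer iterations) can already predict well:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator (prox of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_lp(x, order, gamma, iters=500):
    """ISTA for min_a ||b - X a||_2^2 + gamma * ||a||_1, where the
    columns of X are lagged samples of x (linear prediction)."""
    n = len(x)
    X = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    b = x[order:]
    L = 2 * np.linalg.norm(X, 2) ** 2      # Lipschitz constant of the gradient
    a = np.zeros(order)
    for _ in range(iters):
        grad = 2 * X.T @ (X @ a - b)
        a = soft(a - grad / L, gamma / L)
    return a, X, b

# AR(2)-like test signal; with a small gamma the first two coefficients
# should dominate while the remaining ones stay near zero.
rng = np.random.default_rng(3)
x = np.zeros(400)
for t in range(2, 400):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + 0.1 * rng.standard_normal()
a, X, b = sparse_lp(x, order=10, gamma=0.01)
print(np.round(a, 2))
```

Stopping ISTA early trades accuracy in the coefficients for speed, which parallels the abstract's observation that lower-accuracy solutions can match high-accuracy ones in prediction gain.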
State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.
1978-12-01
The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared…
Frank, T D
2005-01-01
Stationary distributions of processes are derived that involve a time delay and are defined by a linear stochastic neutral delay differential equation. The distributions are Gaussian distributions. The variances of the Gaussian distributions are either monotonically increasing or decreasing functions of the time delays. The variances become infinite when fixed points of corresponding deterministic processes become unstable. (letter to the editor)
Strong practical stability and stabilization of uncertain discrete linear repetitive processes
Dabkowski, Pavel; Galkowski, K.; Bachelier, O.; Rogers, E.; Kummert, A.; Lam, J.
2013-01-01
Vol. 20, No. 2 (2013), pp. 220-233. ISSN 1070-5325. R&D Projects: GA MŠk(CZ) 1M0567. Institutional research plan: CEZ:AV0Z10750506. Institutional support: RVO:67985556. Keywords: strong practical stability; stabilization; uncertain discrete linear repetitive processes; linear matrix inequality. Subject RIV: BC - Control Systems Theory. Impact factor: 1.424, year: 2013. http://onlinelibrary.wiley.com/doi/10.1002/nla.812/abstract
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
A linear dynamic model for rotor-spun composite yarn spinning process
Yang, R H; Wang, S Y
2008-01-01
A linear dynamic model is established for the stable rotor-spun composite yarn spinning process. Approximate oscillating frequencies in the vertical and horizontal directions are obtained. By suitable choice of certain processing parameters, the mixture construction after the convergent point can be optimally matched. The presented study is expected to provide a general pathway to understand the motion of the rotor-spun composite yarn spinning process
MINIMUM ENTROPY DECONVOLUTION OF ONE- AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES
程乾生
1990-01-01
Minimum entropy deconvolution is considered one of the methods for decomposing non-Gaussian linear processes. The concept of the peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of minimum entropy deconvolution is established. The problem of minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is investigated for the first time and the corresponding theory is given. In addition, the relation between minimum entropy deconvolution and the parameter method is discussed.
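The peakedness of a sequence is commonly taken to be the scale-invariant varimax-style norm Σy⁴/(Σy²)²; the paper's exact definition may differ, so this is an illustrative stand-in showing that spike trains score higher than Gaussian noise:

```python
import numpy as np

def peakedness(y):
    """Varimax-style spikiness measure used in minimum entropy
    deconvolution: sum(y^4) / (sum(y^2))^2. Scale-invariant and larger
    for sequences dominated by a few large spikes."""
    y = np.asarray(y, dtype=float)
    return np.sum(y**4) / np.sum(y**2) ** 2

rng = np.random.default_rng(4)
spiky = np.zeros(1000)
spiky[[100, 500, 900]] = [5.0, -4.0, 6.0]   # impulse-like sequence
smooth = rng.standard_normal(1000)           # Gaussian sequence

print(peakedness(spiky), peakedness(smooth))
```

Maximizing this quantity over deconvolution filters drives the output toward a sparse, spiky innovation sequence, which is the intuition behind the convergence theory the abstract mentions.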
Single-machine common/slack due window assignment problems with linear decreasing processing times
Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia
2017-08-01
This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
Andrade-Tacca, Cesar Augusto; Chang, Chia-Chi; Chen, Yi-Hung; Ji, Dar-Ren; Wang, Yi-Yu; Yen, Yue-Quen; Chang, Ching-Yuan
2014-01-01
Highlights: • Ultrasonic irradiation (UI) can auto-induce temperature rise. • Esterification at higher temperature (T) by UI offers greater reduction of acid value. • Sequential UI and catalyst dosing enhance esterification conversion efficiency (η). • UI of jatropha oil at higher T results in less water content in the ester product. • An η of 99.35% is achievable via sequential UI and dosing of 5 mL per dose. - Abstract: Production of jatropha-ester (JO-ester) from jatropha oil (JO) under sequential direct-ultrasonic irradiation (UI) with auto-induced temperature rise, followed by adding a mixture of methanol/sulfuric-acid catalyst (M/C) dose between high temperature intervals, was studied. Comparisons with various doses of 5, 10, 16.6 and 25 mL at different temperature intervals of 108.9-120 °C, 100-120 °C, 85-120 °C and 75-120 °C, respectively, were performed. System parameters examined include: esterification time (t_E) for UI, settling time (t_S) after esterification, and temperature (T). Properties of acid value (AV), iodine value (IV), kinematic viscosity (kV), density (ρ_LO) and water content (m_w) of JO and the JO-ester product were measured. The esterification conversion efficiencies (η) were determined and assessed. An η of 99.35% was obtained at a temperature interval of 108.9-120 °C with 5 mL per dose for 20 doses and t_E of 167.39 min (denoted as Process U120-5), which is slightly higher than the η of 98.87% at a temperature interval of 75-120 °C with 25 mL per dose for 4 doses and t_E of 108.79 min (denoted as Process U120-25). The JO-ester obtained via sequential UI with added doses of 5 mL possesses an AV of 0.24 mg KOH/g, IV of 124.77 g I_2/100 g, kV of 9.89 mm²/s, ρ_LO of 901.73 kg/m³ and m_w of 0.3 wt.%, showing that sequential UI and dosing at a higher temperature interval can give a higher reduction of AV compared with the 36.56 mg KOH/g of the original oil. The effects of t_S and t_E on AV are of minor and moderate importance
Aksoy, Ozan; Weesie, Jeroen
2014-05-01
In this paper, using a within-subjects design, we estimate the utility weights that subjects attach to the outcome of their interaction partners in four decision situations: (1) binary Dictator Games (DG), second player's role in the sequential Prisoner's Dilemma (PD) after the first player (2) cooperated and (3) defected, and (4) first player's role in the sequential Prisoner's Dilemma game. We find that the average weights in these four decision situations have the following order: (1)>(2)>(4)>(3). Moreover, the average weight is positive in (1) but negative in (2), (3), and (4). Our findings indicate the existence of strong negative and small positive reciprocity for the average subject, but there is also high interpersonal variation in the weights in these four nodes. We conclude that the PD frame makes subjects more competitive than the DG frame. Using hierarchical Bayesian modeling, we simultaneously analyze beliefs of subjects about others' utility weights in the same four decision situations. We compare several alternative theoretical models on beliefs, e.g., rational beliefs (Bayesian-Nash equilibrium) and a consensus model. Our results on beliefs strongly support the consensus effect and refute rational beliefs: there is a strong relationship between own preferences and beliefs and this relationship is relatively stable across the four decision situations. Copyright © 2014 Elsevier Inc. All rights reserved.
Abma, Tineke A.; Cook, Tina; Rämgård, Margaretha; Kleba, Elisabeth; Harris, Janet; Wallerstein, Nina
2017-01-01
Social impact, defined as an effect on society, culture, quality of life, community services, or public policy beyond academia, is widely considered as a relevant requirement for scientific research, especially in the field of health care. Traditionally, in health research, the process of knowledge transfer is rather linear and one-sided and has…
Scene matching based on non-linear pre-processing on reference image and sensed image
Zhong Sheng; Zhang Tianxu; Sang Nong
2005-01-01
To solve the heterogeneous image scene matching problem, a non-linear pre-processing method for the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is greatly increased. Especially for low-S/N image pairs, the effect is more remarkable.
Discounted semi-Markov decision processes : linear programming and policy iteration
Wessels, J.; van Nunen, J.A.E.E.
1975-01-01
For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal
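Policy iteration for the discounted case can be sketched on a tiny ordinary (not semi-) Markov decision process; the transition and reward numbers below are made up for illustration:

```python
import numpy as np

# Tiny discounted MDP: 2 states, 2 actions. P[a][s] is the transition
# distribution from state s under action a; R[a][s] the expected reward.
P = np.array([[[0.9, 0.1], [0.4, 0.6]],      # action 0
              [[0.2, 0.8], [0.1, 0.9]]])     # action 1
R = np.array([[1.0, 0.0],                    # action 0
              [2.0, 0.5]])                   # action 1
beta = 0.9                                   # discount factor

def policy_iteration(P, R, beta):
    n_actions, n_states = R.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - beta * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n_states)]
        r_pi = R[policy, np.arange(n_states)]
        v = np.linalg.solve(np.eye(n_states) - beta * P_pi, r_pi)
        # Policy improvement: act greedily with respect to v.
        q = R + beta * P @ v                 # q[a, s]
        new_policy = np.argmax(q, axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v                 # nonrandomized stationary policy
        policy = new_policy

policy, v = policy_iteration(P, R, beta)
print(policy, v)
```

At termination the returned value function satisfies the Bellman optimality equation, which is the structural result (optimality of nonrandomized, stationary Markov strategies) the abstract refers to.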
Linear all-optical signal processing using silicon micro-ring resonators
Ding, Yunhong; Ou, Haiyan; Xu, Jing
2016-01-01
Silicon micro-ring resonators (MRRs) are compact and versatile devices whose periodic frequency response can be exploited for a wide range of applications. In this paper, we review our recent work on linear all-optical signal processing applications using silicon MRRs as passive filters. We focus…
Donkin, C.; Brown, S.; Heathcote, A.; Wagenmakers, E.-J.
2011-01-01
Quantitative models for response time and accuracy are increasingly used as tools to draw conclusions about psychological processes. Here we investigate the extent to which these substantive conclusions depend on whether researchers use the Ratcliff diffusion model or the Linear Ballistic Accumulator model.
Forced Sequence Sequential Decoding
Jensen, Ole Riis
In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is possible as low as Eb/No=0.6 dB, which is about 1.7 dB below the signal-to-noise ratio that marks the cut-off rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters.
Hartman, Brian Davis
1995-01-01
A key drawback to estimating geodetic and geodynamic parameters over time based on satellite laser ranging (SLR) observations is the inability to accurately model all the forces acting on the satellite. Errors associated with the observations and the measurement model can detract from the estimates as well. These 'model errors' corrupt the solutions obtained from the satellite orbit determination process. Dynamical models for satellite motion utilize known geophysical parameters to mathematically detail the forces acting on the satellite. However, these parameters, while estimated as constants, vary over time. These temporal variations must be accounted for in some fashion to maintain meaningful solutions. The primary goal of this study is to analyze the feasibility of using a sequential process noise filter for estimating geodynamic parameters over time from the Laser Geodynamics Satellite (LAGEOS) SLR data. This evaluation is achieved by first simulating a sequence of realistic LAGEOS laser ranging observations. These observations are generated using models with known temporal variations in several geodynamic parameters (along track drag and the J(sub 2), J(sub 3), J(sub 4), and J(sub 5) geopotential coefficients). A standard (non-stochastic) filter and a stochastic process noise filter are then utilized to estimate the model parameters from the simulated observations. The standard non-stochastic filter estimates these parameters as constants over consecutive fixed time intervals. Thus, the resulting solutions contain constant estimates of parameters that vary in time which limits the temporal resolution and accuracy of the solution. The stochastic process noise filter estimates these parameters as correlated process noise variables. As a result, the stochastic process noise filter has the potential to estimate the temporal variations more accurately since the constraint of estimating the parameters as constants is eliminated. A comparison of the temporal
Hung, Shih-Yu; Shen, Ming-Ho; Chang, Ying-Pin
2009-01-01
The sequential neural-network approximation and orthogonal array (SNAOA) were used to shorten the cooling time for the rapid cooling process such that the normalized maximum resolved stress in silicon wafer was always below one in this study. An orthogonal array was first conducted to obtain the initial solution set. The initial solution set was treated as the initial training sample. Next, a back-propagation sequential neural network was trained to simulate the feasible domain to obtain the optimal parameter setting. The size of the training sample was greatly reduced due to the use of the orthogonal array. In addition, a restart strategy was also incorporated into the SNAOA so that the searching process may have a better opportunity to reach a near global optimum. In this work, we considered three different cooling control schemes during the rapid thermal process: (1) downward axial gas flow cooling scheme; (2) upward axial gas flow cooling scheme; (3) dual axial gas flow cooling scheme. Based on the maximum shear stress failure criterion, the other control factors such as flow rate, inlet diameter, outlet width, chamber height and chamber diameter were also examined with respect to cooling time. The results showed that the cooling time could be significantly reduced using the SNAOA approach
Effect of Process Parameters on Friction Model in Computer Simulation of Linear Friction Welding
A. Yamileva
2014-07-01
The friction model is an important part of a numerical model of linear friction welding; its selection determines the accuracy of the results. Existing models employ the classical Amontons-Coulomb law, where the friction coefficient is either constant or linearly dependent on a single parameter. Determining the coefficient of friction is a time-consuming process that requires many experiments, so the feasibility of determining a more complex dependence should be assessed by analyzing the effect of the approximating law in the friction model on the simulation results.
Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.
Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray
2017-07-11
Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as widely used tools for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers on the ever-improving graphics processing units (GPUs) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
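The Jacobi-preconditioned CG algorithm that performed best in the study can be sketched in plain NumPy (the paper's implementation uses cuSPARSE/cuBLAS/CUSP on GPU); the small dense SPD system here merely stands in for a finite-difference PBE matrix:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner.
    Plain NumPy sketch of the algorithm; A must be symmetric positive
    definite."""
    Minv = 1.0 / np.diag(A)                  # Jacobi preconditioner M^-1
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system (stand-in for a finite-difference PBE matrix).
rng = np.random.default_rng(5)
B = rng.standard_normal((20, 20))
A = B @ B.T + 20 * np.eye(20)
b = rng.standard_normal(20)
x = jacobi_pcg(A, b)
print(np.linalg.norm(A @ x - b))  # residual norm; < tol when converged
```

The diagonal preconditioner is cheap to apply and maps well to GPUs, which is consistent with the study's finding that the Jacobi-preconditioned variant gave the best GPU performance.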
A solution for automatic parallelization of sequential assembly code
Kovačević Đorđe
2013-01-01
Since modern multicore processors can execute existing sequential programs only on a single core, there is a strong need for automatic parallelization of program code. Relying on existing algorithms, this paper describes a new software tool for the parallelization of sequential assembly code. The main goal of this paper is to develop a parallelizer that reads sequential assembly code and outputs parallelized code for a MIPS processor with multiple cores. The idea is the following: the parser translates the assembly input file into program objects suitable for further processing. After that, static single assignment form is constructed. Based on the data-flow graph, the parallelization algorithm distributes instructions across the cores. Once the sequential code is parallelized, registers are allocated with a linear-scan allocation algorithm, and the final result is distributed assembly code for each of the cores. In the paper we evaluate the speedup on a matrix multiplication example processed by the parallelizer. The result is almost linear speedup of code execution, which increases with the number of cores: the speedup on two cores is 1.99, while on 16 cores it is 13.88.
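The core step — distributing data-flow-graph instructions across cores once dependencies are known — can be sketched with a simple level-by-level list scheduler (a toy illustration under unit instruction latencies, not the paper's actual algorithm; the instruction names and dependency graph below are made up):

```python
from collections import defaultdict

def schedule(deps, num_cores):
    """Greedy level scheduling: instructions whose inputs are ready are spread
    round-robin over the cores; returns per-core instruction lists and makespan."""
    # deps: {instr: set of instrs it depends on}
    level = {}
    def lvl(i):
        if i not in level:
            level[i] = 1 + max((lvl(d) for d in deps[i]), default=0)
        return level[i]
    for i in deps:
        lvl(i)
    by_level = defaultdict(list)
    for i, l in level.items():
        by_level[l].append(i)
    cores = [[] for _ in range(num_cores)]
    makespan = 0
    for l in sorted(by_level):
        insns = sorted(by_level[l])
        for k, i in enumerate(insns):
            cores[k % num_cores].append(i)
        # each level takes ceil(n / num_cores) unit-latency steps
        makespan += -(-len(insns) // num_cores)
    return cores, makespan

# Toy DAG: four independent multiplies feeding two adds feeding one sum,
# roughly the shape of one matrix-multiplication output element pair
deps = {"m1": set(), "m2": set(), "m3": set(), "m4": set(),
        "a1": {"m1", "m2"}, "a2": {"m3", "m4"}, "s": {"a1", "a2"}}
cores, makespan = schedule(deps, 2)
serial = len(deps)          # 7 unit-latency instructions on a single core
print(serial / makespan)    # speedup on two cores for this toy DAG
```

The dependency structure caps the achievable speedup, which is why the measured speedup (1.99 on two cores, 13.88 on 16) falls slightly short of linear.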
Effects of noise, nonlinear processing, and linear filtering on perceived music quality.
Arehart, Kathryn H; Kates, James M; Anderson, Melinda C
2011-03-01
The purpose of this study was to determine the relative impact of different forms of hearing aid signal processing on quality ratings of music. Music quality was assessed using a rating scale for three types of music: orchestral classical music, jazz instrumental, and a female vocalist. The music stimuli were subjected to a wide range of simulated hearing aid processing conditions, including (1) noise and nonlinear processing, (2) linear filtering, and (3) combinations of noise, nonlinear processing, and linear filtering. Quality ratings were measured in a group of 19 listeners with normal hearing and a group of 15 listeners with sensorineural hearing impairment. Quality ratings in both groups were generally comparable, were reliable across test sessions, were impacted more by noise and nonlinear signal processing than by linear filtering, and were significantly affected by the genre of music. The average quality ratings for music were reasonably well predicted by the hearing aid speech quality index (HASQI), but additional work is needed to optimize the index to the wide range of music genres and processing conditions included in this study.
Luiz Augusto da Cruz Meleiro
2005-06-01
In this work a MIMO nonlinear predictive controller was developed for an extractive alcoholic fermentation process. The internal model of the controller was represented by two MISO Functional Link Networks (FLNs), identified using simulated data generated from a deterministic mathematical model whose kinetic parameters were determined experimentally. The FLN structure offers fast training and guaranteed convergence, since the estimation of the weights is a linear optimization problem. Moreover, the elimination of non-significant weights yields parsimonious models, which allows fast execution in an MPC-based algorithm. The proposed algorithm showed good potential for the identification and control of nonlinear processes.
Forced Sequence Sequential Decoding
Jensen, Ole Riis; Paaske, Erik
1998-01-01
We describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block-interleaved outer Reed-Solomon (RS) codes with a nonuniform profile. With this scheme, decoding with good performance is possible as low as Eb/N0=0.6 dB, which is about 1.25 dB below the signal-to-noise ratio (SNR) that marks the cutoff rate for the full system. Accounting for about 0.45 dB due to the outer codes, sequential decoding takes place at about 1.7 dB below the SNR cutoff rate for the convolutional code. This is possible since the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported...
Sequentially linear analysis for simulating brittle failure
van de Graaf, A.V.
2017-01-01
The numerical simulation of brittle failure at the structural level with nonlinear finite element analysis (NLFEA) remains a challenge due to robustness issues. We attribute these problems to the dimensions of real-world structures combined with softening behavior and negative tangent stiffness at
Sushma Santapuri
2016-10-01
A unified thermodynamic framework for the characterization of functional materials is developed. This framework encompasses linear reversible and irreversible processes with coupled thermal, electrical, magnetic, and/or mechanical effects. The comprehensive framework combines the principles of classical equilibrium and non-equilibrium thermodynamics with the electrodynamics of continua in the infinitesimal strain regime. In the first part of this paper, linear Thermo-Electro-Magneto-Mechanical (TEMM) quasistatic processes are characterized. Thermodynamic stability conditions are further imposed on the linear constitutive model, and restrictions on the corresponding material constants are derived. The framework is then extended to irreversible transport phenomena including thermoelectric, thermomagnetic, and state-of-the-art spintronic and spin caloritronic effects. Using Onsager's reciprocity relationships and the dissipation inequality, restrictions on the kinetic coefficients corresponding to charge, heat, and spin transport processes are derived. All the constitutive models are accompanied by multiphysics interaction diagrams that highlight the various processes that can be characterized using this framework. Keywords: Applied mathematics, Materials science, Thermodynamics
Optimal Sequential Rules for Computer-Based Instruction.
Vos, Hans J.
1998-01-01
Formulates sequential rules for adapting the appropriate amount of instruction to learning needs in the context of computer-based instruction. Topics include Bayesian decision theory, threshold and linear-utility structure, psychometric model, optimal sequential number of test questions, and an empirical example of sequential instructional…
Ruotolo, Francesco; Ruggiero, Gennaro; Vinciguerra, Michela; Iachini, Tina
2012-02-01
The aim of this research is to assess whether the crucial factor in determining the characteristics of blind people's spatial mental images is concerned with the visual impairment per se or the processing style that the dominant perceptual modalities used to acquire spatial information impose, i.e. simultaneous (vision) vs sequential (kinaesthesis). Participants were asked to learn six positions in a large parking area via movement alone (congenitally blind, adventitiously blind, blindfolded sighted) or with vision plus movement (simultaneous sighted, sequential sighted), and then to mentally scan between positions in the path. The crucial manipulation concerned the sequential sighted group. Their visual exploration was made sequential by putting visual obstacles within the pathway in such a way that they could not see simultaneously the positions along the pathway. The results revealed a significant time/distance linear relation in all tested groups. However, the linear component was lower in sequential sighted and blind participants, especially congenital. Sequential sighted and congenitally blind participants showed an almost overlapping performance. Differences between groups became evident when mentally scanning farther distances (more than 5m). This threshold effect could be revealing of processing limitations due to the need of integrating and updating spatial information. Overall, the results suggest that the characteristics of the processing style rather than the visual impairment per se affect blind people's spatial mental images. Copyright © 2011 Elsevier B.V. All rights reserved.
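The time/distance linear relation reported above is typically quantified by regressing mental scanning times on inter-landmark distances; a minimal sketch (the numbers below are made up for illustration and are not the study's data):

```python
import numpy as np

# Hypothetical (distance in m, scanning time in s) pairs for one group
distance = np.array([1.0, 2.0, 3.5, 5.0, 6.5, 8.0])
time = np.array([1.10, 1.32, 1.60, 1.95, 2.20, 2.52])

# Least-squares line: time ≈ slope * distance + intercept
slope, intercept = np.polyfit(distance, time, 1)
# Pearson correlation quantifies the strength of the linear component
r = np.corrcoef(distance, time)[0, 1]

print(f"time ~ {slope:.3f} * distance + {intercept:.3f}, r = {r:.3f}")
```

A lower linear component in a group, as reported for the sequential sighted and congenitally blind participants, would show up here as a weaker correlation r and a poorer fit of the regression line.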
Mochamad Syamsiro
2014-08-01
In this study, the performance of several differently treated natural zeolites in sequential pyrolysis and catalytic reforming of plastic materials, i.e. polypropylene (PP) and polystyrene (PS), was investigated. The experiments were carried out in a two-stage reactor using a semi-batch system. The samples were degraded at 500°C in the pyrolysis reactor and then reformed at 450°C in the catalytic reformer. The results show that mordenite-type natural zeolites can be used as efficient catalysts for the conversion of PP and PS into liquid and gaseous fuels. Treatment of the natural zeolites in HCl solution increased the surface area and the Si/Al ratio, while nickel impregnation increased the activity of the catalyst. As a result, the liquid product was reduced while the gaseous product was increased. For PP, the fraction of gasoline (C5-C12) increased in the presence of catalysts. Natural zeolite catalysts could also be used to decrease the heavy oil fraction (>C20). Propene was found to dominate the gaseous products under all conditions. For PS, propane and propene were the main gaseous components in the presence of the nickel-impregnated natural zeolite catalyst, and propene dominated in pyrolysis over the plain natural zeolite catalyst. The high quality of the gaseous product means it can be used as a fuel either for driving gas engines or for dual-fuel diesel engines.
Observations of linear and nonlinear processes in the foreshock wave evolution
Y. Narita
2007-07-01
Waves in the foreshock region are studied on the basis of the hypothesis that a linear process first excites the waves and that wave-wave nonlinearities then scatter the energy of the primary waves into a number of daughter waves. To examine this wave evolution scenario, the dispersion relations, the wave number spectra of the magnetic field energy, and the dimensionless cross helicity are determined from observations made by the four Cluster spacecraft. The results confirm that the linear process is the ion/ion right-hand resonant instability, but the wave-wave interactions are not clearly identified. We discuss various reasons why the test for the wave-wave nonlinearities fails, and conclude that higher-order statistics would provide direct evidence for the wave coupling phenomena.
Hadronic cross-sections in two photon processes at a future linear collider
Godbole, Rohini M.; Roeck, Albert de; Grau, Agnes; Pancheri, Giulia
2003-01-01
In this note we address the issue of the measurability of hadronic cross-sections at a future photon collider, as well as for two-photon processes at a future high-energy linear e+e- collider. We extend, to higher energy, our previous estimates of the accuracy with which the γγ cross-section needs to be measured in order to distinguish between different theoretical models of the energy dependence of the total cross-sections. We show that the precision necessary to discriminate among these models is indeed achievable at future linear colliders in the Photon Collider option. Further, we note that even in the e+e- option a measurement of the hadron production cross-section via γγ processes, with an accuracy sufficient to allow discrimination between different theoretical models, should be possible. We also comment briefly on the implications of these predictions for hadronic backgrounds at the future TeV-energy e+e- collider CLIC. (author)
Mover Position Detection for PMTLM Based on Linear Hall Sensors through EKF Processing.
Yan, Leyang; Zhang, Hui; Ye, Peiqing
2017-04-06
Accurate mover position is vital for a permanent magnet tubular linear motor (PMTLM) control system. In this paper, two linear Hall sensors are utilized to detect the mover position. However, Hall sensor signals contain third-order harmonics, creating errors in mover position detection. To filter out the third-order harmonics, a signal processing method based on the extended Kalman filter (EKF) is presented. The limitation of conventional processing method is first analyzed, and then EKF is adopted to detect the mover position. In the EKF model, the amplitude of the fundamental component and the percentage of the harmonic component are taken as state variables, and they can be estimated based solely on the measured sensor signals. Then, the harmonic component can be calculated and eliminated. The proposed method has the advantages of faster convergence, better stability and higher accuracy. Finally, experimental results validate the effectiveness and superiority of the proposed method.
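The EKF-based harmonic estimation can be sketched as follows. This is a simplified illustration, not the paper's implementation: we assume the electrical angle θ is known per sample, model one Hall signal as a fundamental of amplitude A plus a third-harmonic fraction k, and estimate the state [A, k] recursively; all variable names and the synthetic signal are our assumptions:

```python
import numpy as np

# Synthetic Hall signal: fundamental amplitude A plus 3rd-harmonic fraction k
rng = np.random.default_rng(0)
A_true, k_true, R = 1.0, 0.10, 1e-4
theta = np.linspace(0, 8 * np.pi, 2000)
y = A_true * np.sin(theta) + k_true * A_true * np.sin(3 * theta)
y += rng.normal(0, np.sqrt(R), theta.size)

x = np.array([0.5, 0.0])            # initial guess for state [A, k]
P = np.eye(2)                       # state covariance
Q = 1e-9 * np.eye(2)                # states are (nearly) constant over time
for th, z in zip(theta, y):
    P = P + Q                       # predict (state model: x constant)
    A, k = x
    # measurement model h(x) and its Jacobian H = dh/dx
    h = A * np.sin(th) + k * A * np.sin(3 * th)
    H = np.array([np.sin(th) + k * np.sin(3 * th), A * np.sin(3 * th)])
    S = H @ P @ H + R               # innovation variance
    K = P @ H / S                   # Kalman gain (2-vector)
    x = x + K * (z - h)             # update with the measurement residual
    P = P - np.outer(K, H @ P)

A_est, k_est = x
fundamental = A_est * np.sin(theta)  # reconstructed signal with harmonic removed
print(A_est, k_est)
```

Once A and k are estimated, the third-harmonic component can be computed and subtracted from the raw sensor signals before the position calculation, which is the filtering role the EKF plays in the paper.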
Processing for maximizing the level of crystallinity in linear aromatic polyimides
St.clair, Terry L. (Inventor)
1991-01-01
The process of the present invention includes first treating a polyamide acid (such as LARC-TPI polyamide acid) in an amide-containing solvent (such as N-methyl pyrrolidone) with an aprotic organic base (such as triethylamine), followed by dehydrating with an organic dehydrating agent (such as acetic anhydride). The level of crystallinity in the linear aromatic polyimide so produced is maximized without any degradation in the molecular weight thereof.
Nonparametric adaptive estimation of linear functionals for low frequency observed Lévy processes
Kappus, Johanna
2012-01-01
For a Lévy process X having finite variation on compact sets and finite first moments, µ(dx) = x ν(dx) is a finite signed measure which completely describes the jump dynamics. We construct kernel estimators for linear functionals of µ and provide rates of convergence under regularity assumptions. Moreover, we consider adaptive estimation via model selection and propose a new strategy for the data-driven choice of the smoothing parameter.
2014-04-11
Linear Friction Welding Process Model for Carpenter Custom 465 Precipitation-Hardened Martensitic Stainless Steel
An Arbitrary Lagrangian-Eulerian finite-element analysis is combined with thermo-mechanical material ... Carpenter Custom 465 precipitation-hardened martensitic stainless steel to develop a linear friction welding (LFW) process model for this material ...
Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos
Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.
2018-04-01
Efficiently producing planetary mapping products from orbital remote sensing images remains a challenging task. Photogrammetric processing of planetary stereo images suffers from several disadvantages, such as the lack of ground control information and of informative features; among these, image matching is the most difficult step in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM-and-orthophoto scheme is adopted in the DTM generation process, which helps to reduce the search space of image matching and to improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.
Watanabe, Junji; Akashi, Mitsuru
2008-01-01
Biointerfaces are crucial for regulating biofunctions. An effective method of producing new biomaterials is surface modification, in particular, the hybrid organic-inorganic approach. In this paper, we propose a method for the sequential formation of hydroxyapatite and calcium carbonate on porous polyester membranes by using an improved alternate soaking process. The resulting hybrid membranes were characterized in terms of their calcium and phosphorus ion contents; further, their structure was analyzed by scanning electron microscopy (SEM), X-ray diffraction (XRD), and infrared spectroscopy (IR). As a typical biofunction, protein adsorption by these hybrid membranes was investigated. Sequential hydroxyapatite and calcium carbonate formation on the membranes was successfully achieved, and the total amounts of hydroxyapatite and calcium carbonate formed were precisely regulated by the preparative conditions. The SEM and XRD characterizations were verified by comparing with the IR results. The amount of adsorbed protein correlated well with not only the amount of hydroxyapatite formed but also the combined amounts of hydroxyapatite and calcium carbonate formed. The results indicate that the hybrid membranes can function as high-performance biointerfaces that are capable of loading biomolecules such as proteins.
Shahzad, Muhammad Ikram; Qureshi, Imtinan Elahi; Manzoor, Shahid; Khan, Hameed Ahmed
1999-01-01
Evidence of sequential fission has been found in the heavy-ion reaction (16.7 MeV/u) 238U + natAu, using muscovite mica as a Dielectric Track Detector (DTD) placed in a 2π-geometry configuration. The reaction products originating from the interactions of 238U ions with the atoms of gold were registered in the detector in the form of tracks and identified for a detailed kinematical analysis. For this purpose the spherical polar coordinates of the correlated tracks of the multipronged events have been analyzed on an event-by-event basis. Automatic, semi-automatic, and manual measuring methods have been employed to collect and manipulate the track data. The known characteristics of binary and ternary events observed in the reaction have been used for the calibration of the detectors. The computed masses, Q-values, and relative velocities of the reaction products determined in this analysis are compared with theoretical predictions based on a sequential fission process. Agreement within one standard deviation with respect to the experimental values has been found for the majority of analyzed events. It is therefore concluded that the three particles in the exit channel of the reaction are produced in two successive steps. In the first step, two intermediate nuclei are formed as a result of an inelastic collision between projectile and target atoms, while in the second step one of the intermediate nuclei from the first step undergoes fission. Furthermore, no proximity effects have been observed.
N. Jaya
2008-10-01
In this work, the design and implementation of a conventional PI controller, a single-region fuzzy logic controller (FLC), a two-region FLC, and a Globally Linearized Controller (GLC) for a two-capacity interacting nonlinear process are carried out. The performance of the process using the single-region FLC, the two-region FLC, and the GLC is compared with that of the conventional PI controller about an operating point of 50%. It has been observed that the GLC and the two-region FLC provide better performance. This procedure is further validated by real-time experimentation using dSPACE.
Yoshida, Toshihiro
1981-01-01
Probabilities of meson production in the sequential decay of Reggeons, which are formed from the projectile and the target in hadron-hadron to Reggeon-Reggeon processes, are investigated. It is assumed that pair creation of heavy quarks and simultaneous creation of two antiquark-quark pairs are negligible. The leading-order terms with respect to the ratio of the creation probability of s̄s pairs to that of ūu (d̄d) pairs are calculated. The production cross sections in the target fragmentation region are given in terms of probabilities in the initial decay of the Reggeons and an effect of many-particle production. (author)
Generalization of the Wide-Sense Markov Concept to a Widely Linear Processing
Espinosa-Pulido, Juan Antonio; Navarro-Moreno, Jesús; Fernández-Alcalá, Rosa María; Ruiz-Molina, Juan Carlos; Oya-Lechuga, Antonia; Ruiz-Fuentes, Nuria
2014-01-01
In this paper we show that the classical definition and the associated characterizations of wide-sense Markov (WSM) signals are not valid for improper complex signals. For that, we propose an extension of the concept of WSM to a widely linear (WL) setting and the study of new characterizations. Specifically, we introduce a new class of signals, called widely linear Markov (WLM) signals, and we analyze some of their properties based either on second-order properties or on state-space models from a WL processing standpoint. The study is performed in both the forwards and backwards directions of time. Thus, we provide two forwards and backwards Markovian representations for WLM signals. Finally, different recursive estimation algorithms are obtained for these models.
Quasi-free and sequential processes in the 9Be(3He,αα)4He reaction at 2.8 MeV
Barbarino, S.; Lattuada, M.; Riggi, F.; Spitaleri, C.; Vinciguerra, D.
1979-01-01
The quasi-free contribution to the 9Be(3He,αα)4He reaction at low incident energy is studied under various detection configurations. The measurements performed at quasi-free angles (symmetrical and non-symmetrical configurations) further confirm that a large region in the energy spectra is dominated by the quasi-free 5He(3He,α)4He process. From a comparison of these results with those taken at symmetrical angles, where the minimum accepted momentum of the spectator is around 70 MeV/c, the conclusion is drawn that no significant distortion is introduced by the sequential peak tails in the deduced 5He-4He momentum distribution.
Vijayan, S.; Wong, C.F.; Buckley, L.P.
1994-11-22
In processes of this invention aqueous waste solutions containing a variety of mixed waste contaminants are treated to remove the contaminants by a sequential addition of chemicals and adsorption/ion exchange powdered materials to remove the contaminants including lead, cadmium, uranium, cesium-137, strontium-85/90, trichloroethylene and benzene, and impurities including iron and calcium. Staged conditioning of the waste solution produces a polydisperse system of size enlarged complexes of the contaminants in three distinct configurations: water-soluble metal complexes, insoluble metal precipitation complexes, and contaminant-bearing particles of ion exchange and adsorbent materials. The volume of the waste is reduced by separation of the polydisperse system by cross-flow microfiltration, followed by low-temperature evaporation and/or filter pressing. The water produced as filtrate is discharged if it meets a specified target water quality, or else the filtrate is recycled until the target is achieved. 1 fig.
Słania J.
2014-10-01
The article presents the production process of covered electrodes and their welding properties. The factors determining welding properties and the currently applied assessment methods are given. The testing methodology, based on measuring and recording instantaneous values of welding current and welding arc voltage, is discussed. An algorithm for creating the reference database of an expert system aiding the assessment of covered-electrode welding properties is shown. The stability of the voltage-current characteristics is discussed. Statistical measures of the instantaneous welding current and arc voltage waveforms used for determining welding process stability are presented, and the welding properties of the tested covered electrodes are compared. The article presents the results of linear regression as well as the impact of the independent variables on welding process performance. Finally, the conclusions drawn from the research are given.
Dynamic actuation of a novel laser-processed NiTi linear actuator
Pequegnat, A; Daly, M; Wang, J; Zhou, Y; Khan, M I
2012-01-01
A novel laser processing technique, capable of locally modifying the shape memory effect, was applied to enhance the functionality of a NiTi linear actuator. By altering local transformation temperatures, an additional memory was imparted into a monolithic NiTi wire to enable dynamic actuation via controlled resistive heating. Characterizations of the actuator load, displacement and cyclic properties were conducted using a custom-built spring-biased test set-up. Monotonic tensile testing was also implemented to characterize the deformation behaviour of the martensite phase. Observed differences in the deformation behaviour of laser-processed material were found to affect the magnitude of the active strain. Furthermore, residual strain during cyclic actuation testing was found to stabilize after 150 cycles while the recoverable strain remained constant. This laser-processed actuator will allow for the realization of new applications and improved control methods for shape memory alloys. (paper)
Varadarajan, Divya; Haldar, Justin P
2017-11-01
The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
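The Fourier relationship the framework builds on is the standard q-space expression (notation ours, under the usual narrow-pulse assumption): the normalized diffusion signal E(q) is the Fourier transform of the EAP P(r), so any linear EAP estimator is a linear functional of the sampled signal:

```latex
E(\mathbf{q}) = \int_{\mathbb{R}^3} P(\mathbf{r})\, e^{-i 2\pi \mathbf{q}\cdot\mathbf{r}}\, \mathrm{d}\mathbf{r}
\qquad\Longleftrightarrow\qquad
P(\mathbf{r}) = \int_{\mathbb{R}^3} E(\mathbf{q})\, e^{\, i 2\pi \mathbf{q}\cdot\mathbf{r}}\, \mathrm{d}\mathbf{q} .
```

Classical Fourier sampling theory (aliasing, resolution, noise amplification) then applies directly to any q-space sampling scheme paired with a linear reconstruction.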
Linear Mathematical Model for Seam Tracking with an Arc Sensor in P-GMAW Processes.
Liu, Wenji; Li, Liangyu; Hong, Ying; Yue, Jianfeng
2017-03-14
Arc sensors have been used in seam tracking and widely studied since the 1980s, and commercial arc-sensing products for T- and V-shaped grooves have been developed. However, it is difficult to use these arc sensors in narrow-gap welding because the arc stability and sensing accuracy are not satisfactory. Pulsed gas metal arc welding (P-GMAW) has been successfully applied in narrow-gap welding and all-position welding processes, so it is worthwhile to research P-GMAW arc-sensing technology. In this paper, we derive a linear mathematical P-GMAW model for arc sensing, and the assumptions of the model are verified through experiments and finite element methods. Finally, the linear characteristics of the mathematical model are investigated. In torch-height-changing, uphill, and groove-angle-changing experiments, the P-GMAW arc signals all satisfied the linear rules. In addition, the faster the welding speed, the higher the arc signal sensitivity; the smaller the groove angle, the greater the arc sensitivity. The arc signal variation rate needs to be modified according to the welding power, groove angle, and weaving or rotation speed.
Rate-Independent Processes with Linear Growth Energies and Time-Dependent Boundary Conditions
Kružík, Martin; Zimmer, J.
2012-01-01
Vol. 5, No. 3 (2012), pp. 591-604, ISSN 1937-1632. Keywords: concentrations; oscillations; time-dependent boundary conditions; rate-independent evolution. Subject: General Mathematics.
Dufour, F. (Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud-Ouest, Team CQFD, and IMB, France); Prieto-Rumeau, T. (UNED, Department of Statistics and Operations Research, Spain)
2016-08-15
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
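The linear programming formulation referred to can be sketched as the standard occupation-measure LP for constrained discounted MDPs (the notation below — discount factor α, initial distribution ν, transition kernel Q, costs c_i with constraint bounds d_i — is ours, not the paper's):

```latex
\begin{aligned}
\min_{\mu \ge 0} \quad & \int_{X \times A} c_0 \,\mathrm{d}\mu \\
\text{s.t.} \quad & \mu(B \times A) = (1-\alpha)\,\nu(B)
  + \alpha \int_{X \times A} Q(B \mid x, a)\, \mu(\mathrm{d}x, \mathrm{d}a)
  \quad \text{for all Borel } B \subseteq X, \\
& \int_{X \times A} c_i \,\mathrm{d}\mu \le d_i, \quad i = 1, \dots, q .
\end{aligned}
```

An optimal stationary policy is then recovered by disintegrating an optimal occupation measure µ into a stochastic kernel on actions given the state; the weight-function hypotheses in the paper are what make this program solvable for unbounded costs.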
Arango P, C.
1993-01-01
In this article the results of an investigation on the processes of adsorption, hydrolysis and consumption of COD (chemical oxygen demand) in both aerobic and anaerobic laboratory-scale reactors are presented, together with their relationship to the conditions of illumination, support medium and oxygen concentration, and their possible application in the aerobic post-treatment of anaerobic leachates. The investigation consists of an experimental assembly and a theoretical development in search of equations describing the global process and the rates of occurrence of the particular processes. The experimental assembly was carried out with four laboratory-scale reactors subjected to different conditions of light, support medium and oxygen concentration; it had two phases: one evaluating the effect of the different conditions on the efficiency of the reactors, and another evaluating the kinetic constants in the best-performing reactor and their application in the aerobic treatment of anaerobic leachates.
Synthetic Aperture Sequential Beamforming
Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke
2008-01-01
A synthetic aperture focusing (SAF) technique denoted Synthetic Aperture Sequential Beamforming (SASB) suitable for 2D and 3D imaging is presented. The technique differs from prior art of SAF in the sense that SAF is performed on pre-beamformed data rather than channel data. The objective is to improve and obtain a more range-independent lateral resolution compared to conventional dynamic receive focusing (DRF) without compromising frame rate. SASB is a two-stage procedure using two separate beamformers. First, a set of B-mode image lines using a single focal point in both transmit and receive is stored. The second stage applies the focused image lines from the first stage as input data. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The performance of SASB with a static image object is compared with DRF.
A non-linear decision making process for public involvement in environmental management activities
Harper, M.R.; Kastenberg, W.
1995-01-01
The international industrial and governmental institutions involved in radioactive waste management and environmental remediation are now entering a new era in which they must significantly expand public involvement. Thus the decision making processes formerly utilized to direct and guide these institutions must now be shifted to take into consideration the needs of many more stakeholders than ever before. To meet this challenge, they now have the job of developing and creating a new set of accurate, sufficient and continuous self-regulating and self-correcting information pathways between themselves and the many divergent stakeholder groups in order to establish sustainable, trusting and respectful relationships. In this paper the authors introduce a new set of non-linear, practical and effective strategies for interaction. These self-regulating strategies provide timely feedback to a system, establishing trust and creating a viable vehicle for staying open and responsive to the needs out of which change and balanced adaptation can continually emerge for all stakeholders. The authors present a decision making process for public involvement which is congruent with the non-linear ideas of holographic and fractal relationships -- the mutual influence between related parts of the whole and the self-symmetry of systems at every level of complexity
Linear and nonlinear post-processing of numerically forecasted surface temperature
M. Casaioli
2003-01-01
In this paper we test different approaches to the statistical post-processing of gridded numerical surface air temperatures (provided by the European Centre for Medium-Range Weather Forecasts) onto the temperature measured at surface weather stations located in the Italian region of Puglia. We consider simple post-processing techniques, like correction for altitude, linear regression from different input parameters and Kalman filtering, as well as a neural network training procedure, stabilised (i.e. driven into the absolute minimum of the error function over the learning set) by means of a Simulated Annealing method. A comparative analysis of the results shows that the performance of the neural networks is the best. This is encouraging for systematic use in meteorological forecast-analysis service operations.
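The simplest of the post-processing techniques the abstract lists, linear regression from model output to station observations, can be sketched in a few lines. The data below are synthetic stand-ins, not the Puglia station records.

```python
# Least-squares linear regression mapping a gridded model temperature
# to a station temperature (synthetic data for illustration only).
import numpy as np

rng = np.random.default_rng(0)
t_model = rng.uniform(0, 30, 200)                           # forecasts (degC)
t_station = 0.9 * t_model - 1.5 + rng.normal(0, 0.5, 200)   # "observations"

# Fit t_station ~ a * t_model + b
A = np.vstack([t_model, np.ones_like(t_model)]).T
(a, b), *_ = np.linalg.lstsq(A, t_station, rcond=None)

corrected = a * t_model + b
rmse_raw = np.sqrt(np.mean((t_model - t_station) ** 2))
rmse_corr = np.sqrt(np.mean((corrected - t_station) ** 2))
```

On data with a systematic bias like this, the regression-corrected forecast has a lower RMSE against the observations than the raw model output.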
Goodman, Roe W
2016-01-01
This textbook for undergraduate mathematics, science, and engineering students introduces the theory and applications of discrete Fourier and wavelet transforms using elementary linear algebra, without assuming prior knowledge of signal processing or advanced analysis.It explains how to use the Fourier matrix to extract frequency information from a digital signal and how to use circulant matrices to emphasize selected frequency ranges. It introduces discrete wavelet transforms for digital signals through the lifting method and illustrates through examples and computer explorations how these transforms are used in signal and image processing. Then the general theory of discrete wavelet transforms is developed via the matrix algebra of two-channel filter banks. Finally, wavelet transforms for analog signals are constructed based on filter bank results already presented, and the mathematical framework of multiresolution analysis is examined.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Bordeaux INP, IMB, UMR CNRS 5251 (France); Piunovskiy, A. B., E-mail: piunov@liv.ac.uk [University of Liverpool, Department of Mathematical Sciences (United Kingdom)
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
The Application of Linear and Nonlinear Water Tanks Case Study in Teaching of Process Control
Li, Xiangshun; Li, Zhiang
2018-02-01
In traditional process control teaching, the importance of imparting knowledge is emphasized while the development of students' creative and practical abilities is ignored. Traditional teaching methods are not very helpful for educating good engineers. Case teaching is a very useful way to improve students' innovative and practical abilities. In traditional case teaching, knowledge points are taught separately, based on different examples or no examples, so it is very hard to set up the whole knowledge structure. Even when all the knowledge is learned, how to use it to solve engineering problems remains challenging for students. In this paper, linear and nonlinear tanks are taken as illustrative examples that involve several knowledge points of process control. The application method of each knowledge point is discussed in detail and simulated. We believe the case-based study will be helpful for students.
Wang, S.; Ancell, B. C.; Huang, G. H.; Baetz, B. W.
2018-03-01
Data assimilation using the ensemble Kalman filter (EnKF) has been increasingly recognized as a promising tool for probabilistic hydrologic predictions. However, little effort has been made to conduct the pre- and post-processing of assimilation experiments, posing a significant challenge in achieving the best performance of hydrologic predictions. This paper presents a unified data assimilation framework for improving the robustness of hydrologic ensemble predictions. Statistical pre-processing of assimilation experiments is conducted through the factorial design and analysis to identify the best EnKF settings with maximized performance. After the data assimilation operation, statistical post-processing analysis is also performed through the factorial polynomial chaos expansion to efficiently address uncertainties in hydrologic predictions, as well as to explicitly reveal potential interactions among model parameters and their contributions to the predictive accuracy. In addition, the Gaussian anamorphosis is used to establish a seamless bridge between data assimilation and uncertainty quantification of hydrologic predictions. Both synthetic and real data assimilation experiments are carried out to demonstrate feasibility and applicability of the proposed methodology in the Guadalupe River basin, Texas. Results suggest that statistical pre- and post-processing of data assimilation experiments provide meaningful insights into the dynamic behavior of hydrologic systems and enhance robustness of hydrologic ensemble predictions.
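The EnKF update at the core of the assimilation framework can be sketched as a single stochastic analysis step with perturbed observations. The state dimension, observation operator, and error levels below are hypothetical, not taken from the Guadalupe River experiments.

```python
# Minimal stochastic EnKF analysis step (perturbed observations).
# All dimensions and error levels are illustrative.
import numpy as np

rng = np.random.default_rng(1)
n, m, N = 3, 1, 50                    # state dim, obs dim, ensemble size
H = np.array([[1.0, 0.0, 0.0]])       # observe the first state component
R = np.array([[0.25]])                # observation error covariance

ens = rng.normal(5.0, 1.0, size=(n, N))   # forecast ensemble
y = np.array([6.0])                        # observation

# Kalman gain from the ensemble sample covariance
anom = ens - ens.mean(axis=1, keepdims=True)
P = anom @ anom.T / (N - 1)
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member against an individually perturbed observation
y_pert = y[:, None] + rng.normal(0.0, np.sqrt(R[0, 0]), size=(m, N))
ens_a = ens + K @ (y_pert - H @ ens)
```

After the update, the analysis-ensemble mean of the observed component lies between the forecast mean and the observation, weighted by the gain.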
Biased lineups: sequential presentation reduces the problem.
Lindsay, R C; Lea, J A; Nosworthy, G J; Fulford, J A; Hector, J; LeVan, V; Seabrook, C
1991-12-01
Biased lineups have been shown to increase significantly false, but not correct, identification rates (Lindsay, Wallbridge, & Drennan, 1987; Lindsay & Wells, 1980; Malpass & Devine, 1981). Lindsay and Wells (1985) found that sequential lineup presentation reduced false identification rates, presumably by reducing reliance on relative judgment processes. Five staged-crime experiments were conducted to examine the effect of lineup biases and sequential presentation on eyewitness recognition accuracy. Sequential lineup presentation significantly reduced false identification rates from fair lineups as well as from lineups biased with regard to foil similarity, instructions, or witness attire, and from lineups biased in all of these ways. The results support recommendations that police present lineups sequentially.
Linear and quadratic models of point process systems: contributions of patterned input to output.
Lindsay, K A; Rosenberg, J R
2012-08-01
In the 1880's Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940's, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970's, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which its terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point-process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings from the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.
Kang, Kun-Young; Lee, Young-Gi; Shin, Dong Ok; Kim, Jin-Chul; Kim, Kwang Man
2014-01-01
A pouch-type flexible thin-film lithium-ion battery is fabricated by sequential screen-printing (wet) processes to produce consecutive layers of a current collector, positive and negative electrodes, and a gel polymer electrolyte. Optimum conditions of each process are determined by adjusting the paste or slurry compositions to achieve lower surface resistance of each layer (current collector and electrodes) and higher ionic conductivity of the gel polymer electrolyte. The fabricated flexible thin-film lithium-ion battery (5.5 × 5.5 cm², 325 μm thick) shows superior electrochemical performance, including an energy density of 292.3 Wh L⁻¹ based on electrode size (4.0 × 4.0 cm²), an initial discharge capacity of 2.5 mAh cm⁻² per electrode area, and a capacity retention ratio of over 68% at the 50th cycle. To further improve the battery performance, the wet processes are modified by adopting hybrid (dry-wet) processes, which mainly consist of the formation of metallic current collector layers (Al and Cu) using a thermal evaporator and another optimized gel polymer electrolyte, to achieve an energy density of 332.8 Wh L⁻¹ and a capacity retention ratio of 84% at the 50th cycle. Cell flexibility is also confirmed by stable open circuit voltages after the system is subjected to several hundred iterations of bending, stretching, and even folding. The suggested wet and dry-wet processes could potentially be expanded to a high-speed mass-production roll-to-roll process.
Montiel Corona, Virginia; Razo-Flores, Elías
2018-02-01
Continuous H₂ and CH₄ production in a two-stage process to increase energy recovery from agave bagasse enzymatic hydrolysate was studied. In the first stage, the effect of organic loading rate (OLR) and stirring speed on volumetric hydrogen production rate (VHPR) was evaluated in a continuous stirred tank reactor (CSTR); by controlling the homoacetogenesis with the agitation speed and maintaining an OLR of 44 g COD/L-d, it was possible to reach a VHPR of 6 L H₂/L-d, equivalent to 1.34 kJ/g bagasse. In the second stage, the effluent from the CSTR was used as substrate to feed a UASB reactor for CH₄ production. A volumetric methane production rate (VMPR) of 6.4 L CH₄/L-d was achieved with a high OLR (20 g COD/L-d) and short hydraulic retention time (HRT, 14 h), producing 225 mL CH₄/g bagasse, equivalent to 7.88 kJ/g bagasse. The two-stage continuous process significantly increased energy conversion efficiency (56%) compared to one-stage hydrogen production (8.2%). Copyright © 2017 Elsevier Ltd. All rights reserved.
Sahn, James J; Granger, Brett A; Martin, Stephen F
2014-10-21
A strategy for generating diverse collections of small molecules has been developed that features a multicomponent assembly process (MCAP) to efficiently construct a variety of intermediates possessing an aryl aminomethyl subunit. These key compounds are then transformed via selective ring-forming reactions into heterocyclic scaffolds, each of which possesses suitable functional handles for further derivatizations and palladium-catalyzed cross coupling reactions. The modular nature of this approach enables the facile construction of libraries of polycyclic compounds bearing a broad range of substituents and substitution patterns for biological evaluation. Screening of several compound libraries thus produced has revealed a large subset of compounds that exhibit a broad spectrum of medicinally-relevant activities.
Demir, Aydeniz; Köleli, Nurcan
2013-01-01
A two-step method for the remediation of three different types of lead (Pb)-contaminated soil was evaluated. The first step included soil washing with ethylenediaminetetraacetic acid (EDTA) to remove Pb from soils. The washing experiments were performed with 0.05 M Na2EDTA at 1:10 soil to liquid ratio. Following the washing, Pb removal efficiency from soils ranged within 50-70%. After the soil washing process, Pb2+ ions in the washing solution were reduced electrochemically in a fixed-bed reactor. Lead removal efficiency with the electrochemical reduction at -2.0 V potential ranged within 57-76%. The overall results indicate that this two-step method is an environmentally-friendly and effective technology to remediate Pb-contaminated soils, as well as Pb-contaminated wastewater treatment due to the transformation of toxic Pb2+ ions into a non-hazardous metallic form (Pb(0)).
Bressani, Ana Paula P; Garcia, Karen C A; Hirata, Daniela B; Mendes, Adriano A
2015-02-01
The present study deals with the enzymatic synthesis of alkyl esters with emollient properties by a sequential hydrolysis/esterification process (hydroesterification) using unrefined macaw palm oil from pulp seeds (MPPO) as feedstock. Crude enzymatic extract from dormant castor bean seeds was used as biocatalyst in the production of free fatty acids (FFA) by hydrolysis of MPPO. Esterification of purified FFA with several alcohols in heptane medium was catalyzed by immobilized Thermomyces lanuginosus lipase (TLL) on poly-hydroxybutyrate (PHB) particles. Under optimal experimental conditions (mass ratio oil:buffer of 35% m/m, reaction temperature of 35 °C, biocatalyst concentration of 6% m/m, and stirring speed of 1,000 rpm), complete hydrolysis of MPPO was reached after 110 min of reaction. Maximum ester conversion percentage of 92.4 ± 0.4% was reached using hexanol as acyl acceptor at 750 mM of each reactant after 15 min of reaction. The biocatalyst retained full activity after eight successive cycles of esterification reaction. These results show that the proposed process is a promising strategy for the synthesis of alkyl esters of industrial interest from macaw palm oil, an attractive option for the Brazilian oleochemical industry.
Shafiee, Alireza
2016-09-24
A theoretical model for multi-tubular palladium-based membrane is proposed in this paper and validated against experimental data for two different sized membrane modules that operate at high temperatures. The model is used in a sequential simulation format to describe and analyse pure hydrogen and hydrogen binary mixture separations, and then extended to simulate an industrial scale membrane unit. This model is used as a sub-routine within an ASPEN Plus model to simulate a membrane reactor in a steam reforming hydrogen production plant. A techno-economic analysis is then conducted using the validated model for a plant producing 300 TPD of hydrogen. The plant utilises a thin (2.5 μm) defect-free and selective layer (Pd75Ag25 alloy) membrane reactor. The economic sensitivity analysis results show usefulness in finding the optimum operating condition that achieves minimum hydrogen production cost at break-even point. A hydrogen production cost of 1.98 $/kg is estimated while the cost of the thin-layer selective membrane is found to constitute 29% of total process capital cost. These results indicate the competiveness of this thin-layer membrane process against conventional methods of hydrogen production. © 2016 Hydrogen Energy Publications LLC
Neumann, Patricio; Barriga, Felipe; Álvarez, Claudia; González, Zenón; Vidal, Gladys
2018-03-15
The aim of this study was to evaluate the performance and digestate quality of advanced anaerobic digestion of sewage sludge including sequential ultrasound-thermal (55 °C) pre-treatment. Both stages of pre-treatment contributed to chemical oxygen demand (COD) solubilization, with an overall factor of 11.4 ± 2.2%. Pre-treatment led to 19.1, 24.0 and 29.9% increased methane yields at 30, 15 and 7.5 days solid retention times (SRT), respectively, without affecting process stability or accumulation of intermediates. Pre-treatment decreased up to 4.2% water recovery from the digestate, but SRT was a more relevant factor controlling dewatering. Advanced digestion showed 2.4-3.1 and 1.5 logarithmic removals of coliforms and coliphages, respectively, and up to a 58% increase in the concentration of inorganics in the digestate solids compared to conventional digestion. The COD balance of the process showed that the observed increase in methane production was proportional to the pre-treatment solubilization efficiency. Copyright © 2018 Elsevier Ltd. All rights reserved.
NONLINEAR REFLECTION PROCESS OF LINEARLY POLARIZED, BROADBAND ALFVÉN WAVES IN THE FAST SOLAR WIND
Shoda, M.; Yokoyama, T., E-mail: shoda@eps.s.u-tokyo.ac.jp [Department of Earth and Planetary Science, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033 (Japan)
2016-04-01
Using one-dimensional numerical simulations, we study the elementary process of Alfvén wave reflection in a uniform medium, including nonlinear effects. In the linear regime, Alfvén wave reflection is triggered only by the inhomogeneity of the medium, whereas in the nonlinear regime, it can occur via nonlinear wave–wave interactions. Such nonlinear reflection (backscattering) is typified by decay instability. In most studies of decay instabilities, the initial condition has been a circularly polarized Alfvén wave. In this study we consider a linearly polarized Alfvén wave, which drives density fluctuations by its magnetic pressure force. For generality, we also assume a broadband wave with a red-noise spectrum. In the data analysis, we decompose the fluctuations into characteristic variables using local eigenvectors, thus revealing the behaviors of the individual modes. Different from the circular-polarization case, we find that the wave steepening produces a new energy channel from the parent Alfvén wave to the backscattered one. Such nonlinear reflection explains the observed increasing energy ratio of the sunward to the anti-sunward Alfvénic fluctuations in the solar wind with distance against the dynamical alignment effect.
Zhao, Zilong; Liu, Zekun; Wang, Hongjie; Dong, Wenyi; Wang, Wei
2018-07-01
Treatment of Ni-EDTA in industrial nickel plating effluents was investigated by the integrated application of Fenton and ozone-based oxidation processes. Determination of the integration sequence found that Fenton oxidation presented a higher apparent kinetic rate constant for Ni-EDTA oxidation and a higher capacity for contamination load than the ozone-based oxidation process; the latter, however, was favorable for guaranteeing the further mineralization of organic substances, especially at low concentrations. A serial-connection mode of the two oxidation processes was appraised; Fenton effluent treated by hydroxide precipitation and filtration negatively affected the overall performance of the sequential system, as evidenced by the removal efficiencies of Ni²⁺ and TOC dropping from 99.8% to 98.7%, and from 74.8% to 66.6%, respectively. As a comparison, the O₃/Fe²⁺ oxidation process proved more effective than other processes (e.g. O₃-Fe²⁺, O₃/H₂O₂/Fe²⁺, O₃/H₂O₂-Fe²⁺), and the final effluent Ni²⁺ concentration could satisfy the discharge standard (Fenton reaction at an initial influent pH of 3.0; O₃ dosage of 252 mg L⁻¹, Fe²⁺ of 150 mg L⁻¹, and reaction time of 30 min for O₃/Fe²⁺ oxidation). Furthermore, a pilot-scale test was carried out to study the practical treatability of real nickel plating effluent, revealing the effective removal of some other co-existing contaminants, with the Fenton reaction contributing most, its share ranging from 72.41% to 93.76%. The economic cost advantage makes it a promising alternative to continuous Fenton oxidation. Copyright © 2018 Elsevier Ltd. All rights reserved.
Delvasto, P.; Orta Rodríguez, R.; Blanco, S.
2016-02-01
Rechargeable Ni-MH batteries contain strategic metal values which are worth recovering. In the present work, a preliminary sequential chemical and electrochemical procedure is proposed in order to reclaim materials bearing Ni, Co and rare earth elements (REE) from spent Ni-MH batteries. Initially, spent batteries are disassembled to separate the electrode materials (anode and cathode), which are then leached with an aqueous solution of 5 wt% sulphuric acid. The metal content of this solution is checked by atomic absorption spectrometry techniques. The obtained solution is pH-adjusted (with NaOH) until the pH is between 4.0 and 4.3; then, it is heated up to 70°C to precipitate a rare earth element sulphate (Nd, La, Pr, Ce), as determined by means of X-ray fluorescence techniques. The solids-free solution is then electrolyzed in order to recover a Ni-Co alloy. The electrolysis conditions were established through a cyclic voltammetry technique.
Imitation learning of Non-Linear Point-to-Point Robot Motions using Dirichlet Processes
Krüger, Volker; Tikhanoff, Vadim; Natale, Lorenzo
2012-01-01
In this paper we discuss the use of the infinite Gaussian mixture model and Dirichlet processes for learning robot movements from demonstrations. The starting point of this work is an earlier paper where the authors learn a non-linear dynamic robot movement model from a small number of observations. The model in that work is learned using a classical finite Gaussian mixture model (FGMM) where the Gaussian mixtures are appropriately constrained. The problem with this approach is that one needs to make a good guess for how many mixtures the FGMM should use. In this work, we generalize this approach. We test our algorithm on the same data that was used in [5], where the authors use motion capture devices to record the demonstrations. As further validation we test our approach on novel data acquired on our iCub in a different demonstration scenario in which the robot is physically driven by the human.
A new formulation of the linear sampling method: spatial resolution and post-processing
Piana, M; Aramini, R; Brignone, M; Coyle, J
2008-01-01
A new formulation of the linear sampling method is described, which requires the regularized solution of a single functional equation set in a direct sum of L 2 spaces. This new approach presents the following notable advantages: it is computationally more effective than the traditional implementation, since time consuming samplings of the Tikhonov minimum problem and of the generalized discrepancy equation are avoided; it allows a quantitative estimate of the spatial resolution achievable by the method; it facilitates a post-processing procedure for the optimal selection of the scatterer profile by means of edge detection techniques. The formulation is described in a two-dimensional framework and in the case of obstacle scattering, although generalizations to three dimensions and penetrable inhomogeneities are straightforward
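The regularized-solution step the abstract refers to is Tikhonov regularization of an ill-conditioned linear system. The sketch below is a generic stand-in, not the paper's functional-equation formulation: the "far-field" operator and data are synthetic, built to have rapidly decaying singular values.

```python
# Tikhonov regularization of an ill-conditioned linear system
# (synthetic operator and data, for illustration only).
import numpy as np

rng = np.random.default_rng(2)
n = 10
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** -np.arange(n)             # rapidly decaying singular values
F = U @ np.diag(s) @ V.T              # ill-conditioned "far-field" operator

x_true = rng.normal(size=n)
b = F @ x_true + 1e-6 * rng.normal(size=n)   # noisy right-hand side

def tikhonov(F, b, alpha):
    """Minimize ||F x - b||^2 + alpha * ||x||^2 via the normal equations."""
    return np.linalg.solve(F.T @ F + alpha * np.eye(F.shape[1]), F.T @ b)

x_naive = np.linalg.solve(F, b)       # amplifies the noise
x_reg = tikhonov(F, b, alpha=1e-6)
```

The naive inverse multiplies the noise by up to the reciprocal of the smallest singular value, while the regularized solution damps those components, so its error against `x_true` is far smaller here.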
Kamphaus, Randy W.; And Others
The development of two types of mental processing (sequential and simultaneous) in preschool and elementary children was examined in this study. Specifically, the aims of the study were to develop a revised set of tasks based upon previous findings (Naglieri, Kaufman, Kaufman, & Kamphaus, 1981; Kaufman, Kaufman, Kamphaus, & Naglieri, in…
Technical training seminar: Data Converters and Linear Products for Signal Processing and Control
Davide Vitè
2006-01-01
Monday 23 January 2006 TECHNICAL TRAINING SEMINAR from 14:00 to 17:30, Training Centre Auditorium (bldg. 503) Data Converters and Linear Products for Signal Processing and Control Marco Corsi, William Bright, Olrik Maier, Andrea Huder / TEXAS INSTRUMENTS (US, D, CH) Texas Instruments will present recent technology advances in design and manufacturing of A/D and D/A converters, and of operational amplifiers. 14:00 - 15:30 HIGH SPEED - Technology and the new process BiCom3: High speed ADCs, DACs, operational amplifiers 15:30 - 15:45 coffee 15:45 - 17:15 HIGH PRECISION - Technology and the new process HPA07: High precision ADCs, DACs, operational amplifiers questions, discussion Industrial partners: Robert Medioni, François Caloz Spoerle Electronic, CH-1440 Montagny (VD), Switzerland Phone: + 41 24 447 01 37, email: RMedioni@spoerle.com, http://www.spoerle.com Language: English. Free seminar (no registration). Organiser: Davide Vitè / HR-PMD-ATT / 75141 For more information, visit the Te...
Parvatkar, P.T.; Kadam, H.K.; Tilve, S.G.
This review summarizes the recent examples of tandem or sequential reactions used for the last 10 years for the synthesis of fused and bridged bicyclic or polycyclic compounds with IMDA cycloaddition as the key means to access these compounds...
Orhan Dengiz
2018-01-01
Land evaluation analysis is a prerequisite to achieving optimum utilization of the available land resources. Lack of knowledge on the best combination of factors that suit production of yields has contributed to low production. The aim of this study was to determine the most suitable areas for agricultural uses. For that reason, in order to determine land suitability classes of the study area, a multi-criteria approach was used with the linear combination technique and the analytical hierarchy process, taking into consideration land and soil physico-chemical characteristics such as slope, texture, depth, drainage, stoniness, erosion, pH, EC, CaCO3 and organic matter. These data and land mapping units were taken from a digital detailed soil map at 1:5,000 scale. In addition, a GIS program was used to produce the land suitability map of the study area. This study was carried out at Mahmudiye, Karaamca, Yazılı, Çiçeközü, Orhaniye and Akbıyık villages in the Yenişehir district of Bursa province. The total study area is 7059 ha; 6890 ha of it has been used for irrigated agriculture, dry farming and pasture, while 169 ha has been used for non-agricultural activities such as settlement, roads and water bodies. Average annual temperature and precipitation of the study area are 16.1°C and 1039.5 mm, respectively. After determination of the land suitability distribution classes, it was found that 15.0% of the study area is highly (S1) and moderately (S2) suitable, while 85% is marginally suitable or unsuitable, coded as S3 and N. Relations were also determined by comparing the results of the linear combination technique with other hierarchical approaches such as the Land Use Capability Classification and Suitability Class for Agricultural Use methods.
Multi-agent sequential hypothesis testing
Kim, Kwang-Ki K.
2014-12-15
This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
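The single-agent building block behind this multi-agent framework is Wald's sequential probability ratio test (SPRT). The sketch below is that classical single-agent test for two Gaussian mean hypotheses, not the paper's multi-agent game; the hypotheses, noise level, and error targets are illustrative.

```python
# Wald SPRT for H0: mean mu0 vs H1: mean mu1 under Gaussian noise
# (illustrative parameters, single agent).
import math
import random

random.seed(7)
mu0, mu1, sigma = 0.0, 1.0, 1.0
alpha = beta = 0.05                   # target error probabilities
A = math.log((1 - beta) / alpha)      # accept-H1 threshold
B = math.log(beta / (1 - alpha))      # accept-H0 threshold

def sprt(true_mu):
    """Accumulate log-likelihood ratios until a threshold is crossed."""
    llr, n = 0.0, 0
    while B < llr < A:
        x = random.gauss(true_mu, sigma)
        # log [f1(x) / f0(x)] for Gaussian densities with equal variance
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        n += 1
    return (1 if llr >= A else 0), n

decision, samples = sprt(true_mu=mu1)
```

The thresholds come from Wald's approximations, so over many runs with data from H1 the test decides 1 roughly 95% of the time, stopping after only a handful of samples on average.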
Pearson, Jeremy [Department of Chemical Engineering and Materials Science - University of California Irvine, 916 Engineering Tower, Irvine, CA, 92697 (United States); Miller, George [Department of Chemistry- University of California Irvine, 2046D PS II, Irvine, CA, 92697 (United States); Nilsson, Mikael [Department of Chemical Engineering and Materials Science - University of California Irvine, 916 Engineering Tower, Irvine, CA, 92697 (United States)
2013-07-01
Treatment of used nuclear fuel through solvent extraction separation processes is hindered by radiolytic damage from radioactive isotopes present in used fuel. The nature of the damage caused by the radiation may depend on the radiation type, whether low linear energy transfer (LET) such as gamma radiation or high LET such as alpha radiation. Used nuclear fuel contains beta/gamma-emitting isotopes but also a significant amount of transuranics, which are generally alpha emitters. Studying the respective effects on matter of both of these types of radiation will allow for accurate prediction and modeling of process performance losses with respect to dose. Current studies show that alpha radiation has milder effects than gamma radiation. This is important to know because it means that solvent extraction solutions exposed to alpha radiation may last longer than expected and need less repair and replacement. These models are important for creating robust, predictable, and economical processes that have strong potential for mainstream adoption on the commercial level. The effects of gamma radiation on solvent extraction ligands have been more extensively studied than the effects of alpha radiation. This is due to the inherent difficulty in producing a sufficient and confluent dose of alpha particles within a sample without leaving the sample contaminated with long-lived radioactive isotopes. Helium ion beam and radioactive isotope sources have been studied in the literature. We have developed a method for studying the effects of high-LET radiation in situ via {sup 10}B activation and the high-LET particles that result from the {sup 10}B(n,α){sup 7}Li reaction which follows. Our model for dose involves solving a partial differential equation representing absorption by {sup 10}B of an isotropic field of neutrons penetrating a sample. This method has been applied to organic solutions of TBP and CMPO, two ligands common in TRU solvent extraction treatment processes. Rates
Realization of beam polarization at the linear collider and its application to EW processes
Franco-Sollova, F.
2006-07-15
The use of beam polarization at the future ILC e{sup +}e{sup -} linear collider will benefit the physics program significantly. This thesis explores three aspects of beam polarization: the application of beam polarization to the study of electroweak processes, the precise measurement of the beam polarization, and finally, the production of polarized positrons at a test beam experiment. In the first part of the thesis the importance of beam polarization at the future ILC is exhibited: the benefits of employing transverse beam polarization (in both beams) for the measurement of triple gauge boson couplings (TGCs) in the W-pair production process are studied. The sensitivity to anomalous TGC values is compared for the cases of transverse and longitudinal beam polarization at a center of mass energy of 500 GeV. Due to the suppressed contribution of the t-channel {nu} exchange, the sensitivity is higher for longitudinal polarization. For some physics analyses the usual polarimetry techniques do not provide the required accuracy for the measurement of the beam polarization (around 0.25% with Compton polarimetry). The second part of the thesis deals with a complementary method to measure the beam polarization employing physics data acquired with two polarization modes. The process of single-W production is chosen due to its high cross section. The expected precision for 500 fb{sup -1} and W{yields}{mu}{nu} decays only, is {delta}P{sub e{sup -}}/P{sub e{sup -}}=0.26% and {delta}P{sub e{sup +}}/P{sub e{sup +}}=0.33%, which can be further improved by employing additional W-decay channels. The first results of an attempt to produce polarized positrons at the E-166 experiment are shown in the last part of the thesis. The E-166 experiment, located at the Final Focus Test Beam at SLAC's LINAC employs a helical undulator to induce the emission of circularly polarized gamma rays by the beam electrons. These gamma rays are converted into longitudinally polarized electron
Sequential charged particle reaction
Hori, Jun-ichi; Ochiai, Kentaro; Sato, Satoshi; Yamauchi, Michinori; Nishitani, Takeo
2004-01-01
The effective cross sections for producing sequential reaction products in F82H, pure vanadium and LiF with 14.9-MeV neutrons were obtained and compared with estimated values. Since the sequential reactions depend on the behavior of the secondary charged particles, the effective cross sections depend on the target nuclei and the material composition. The effective cross sections were also estimated using the EAF libraries and compared with the experimental ones; there were large discrepancies between the estimated and experimental values. Additionally, we show the contribution of the sequential reactions to the induced activity and dose rate in the boundary region with water. The present study clarifies that sequential reactions are of great importance in evaluating the dose rates around the surface of cooling pipes and the activated corrosion products. (author)
Primary processes in radiation chemistry. LET (Linear Energy Transfer) effect in water radiolysis
Trupin-Wasselin, V.
2000-01-01
The effect of ionizing radiation on aqueous solutions leads to water ionization and then to the formation of radical species and molecular products (e-aq, H•, OH•, H2O2, H2). It has been shown that the stopping power, characterized by the LET (Linear Energy Transfer) value, differs when the nature of the ionizing radiation differs. Few data are available today for high-LET radiation such as protons and high-energy heavy ions. These particles have been used to better understand the primary processes in radiation chemistry. The yield of a chemical dosimeter (the Fricke dosimeter) and that of hydrogen peroxide have been determined for different LET values. The effect of the dose rate on the Fricke dosimeter yield and on the H2O2 yield has been studied as well. When the dose rate increases, an increase of the molecular product yields is observed; at very high dose rate, the yields decrease on account of the attack of the molecular products by radicals. The H2O2 yield in alkaline medium decreases when the pH reaches 12; this decrease can be explained by a slowing down of the H2O2 formation rate in alkaline medium. The superoxide radical has also been studied in this work. A new detection method, time-resolved chemiluminescence, has been developed for this radical; this technique is more sensitive than absorption spectroscopy. Experiments with heavy ions have made it possible to determine the O2•- yield directly in the irradiation cell. The experimental results have been compared with those obtained with a Monte Carlo simulation code. (O.M.)
Sequential Generalized Transforms on Function Space
Jae Gil Choi
2013-01-01
We define two sequential transforms on a function space Ca,b[0,T] induced by a generalized Brownian motion process. We then establish the existence of the sequential transforms for functionals in a Banach algebra of functionals on Ca,b[0,T]. We also establish that either of these transforms acts as an inverse transform of the other. Finally, we give some remarks about certain relations between our sequential transforms and other well-known transforms on Ca,b[0,T].
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.
Yang, Changju; Kim, Hyongsuk
2016-08-19
A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element for implementing a neural synapse due to its embedded analog memory and analog multiplication functions. Its resistance variation under a voltage input is generally a nonlinear function of time, and linearizing the memristance variation with time is very important for ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities; it linearizes the variation of memristance through the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming in an anti-serial architecture is investigated, and a memristor bridge synapse built with two sets of the anti-serial memristor architecture is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model.
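The linearization claim can be checked with a toy simulation of the linear drift model. In an anti-serial pair, the series resistance stays constant, so the programming current, and hence the memristance change per step, is constant; a single memristor under the same voltage drifts nonlinearly. All device parameters below are illustrative assumptions, not values from the paper:

```python
# Illustrative device parameters (assumed, not from the paper)
R_ON, R_OFF = 100.0, 16e3   # ohms
K = 1e4                     # lumped drift coefficient mu_v * R_ON / D^2 (assumed)
V, DT, STEPS = 1.0, 1e-4, 2000

def memristance(w):
    # linear drift model: internal state w in [0, 1]
    return R_ON * w + R_OFF * (1.0 - w)

def clip(w):
    return min(1.0, max(0.0, w))

def simulate_single(w=0.1):
    # one memristor under constant voltage: current depends on state,
    # so the memristance trajectory is nonlinear in time
    trace = []
    for _ in range(STEPS):
        i = V / memristance(w)
        w = clip(w + K * i * DT)
        trace.append(memristance(w))
    return trace

def simulate_antiserial(w1=0.1, w2=0.9):
    # two memristors with opposite polarities in series: the total
    # resistance stays constant, so the current (and the memristance
    # change per step) is constant, giving a linear programming ramp
    trace = []
    for _ in range(STEPS):
        i = V / (memristance(w1) + memristance(w2))
        w1 = clip(w1 + K * i * DT)   # forward polarity: state grows
        w2 = clip(w2 - K * i * DT)   # reversed polarity: state shrinks
        trace.append(memristance(w1))
    return trace
```

Comparing the per-step memristance increments of the two simulations shows the pair's ramp is (numerically) constant while the single device's is not.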
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space using the knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
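The linear structure mentioned above can be sketched on a toy first-exit problem: with desirability z = exp(-v), the Bellman equation becomes the linear fixed point z(s) = exp(-q(s)) Σ p(s'|s) z(s'), and the optimal policy reweights the passive dynamics by z. The chain size, costs, and passive dynamics below are illustrative assumptions:

```python
import math

# Tiny first-exit LMDP on a chain 0..N; state N is terminal.
N = 5
q = {s: 1.0 for s in range(N)}   # running state cost (assumed)
q_term = 0.0                     # terminal cost at state N

def passive(s):
    # uncontrolled dynamics: step left / stay / step right uniformly
    nbrs = [max(0, s - 1), s, min(N, s + 1)]
    return {n: nbrs.count(n) / 3.0 for n in set(nbrs)}

# Desirability z = exp(-v) turns the Bellman equation into the linear
# fixed point z(s) = exp(-q(s)) * sum_s' p(s'|s) z(s').
z = {s: 1.0 for s in range(N + 1)}
z[N] = math.exp(-q_term)
for _ in range(500):
    for s in range(N):
        z[s] = math.exp(-q[s]) * sum(p * z[n] for n, p in passive(s).items())

v = {s: -math.log(z[s]) for s in range(N + 1)}   # optimal cost-to-go

def optimal_policy(s):
    # optimal controlled transition: u*(s'|s) proportional to p(s'|s) z(s')
    p = passive(s)
    norm = sum(pv * z[n] for n, pv in p.items())
    return {n: pv * z[n] / norm for n, pv in p.items()}
```

The recovered value function decreases toward the terminal state, and the optimal policy shifts probability mass toward high-desirability neighbors.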
A Trust-region-based Sequential Quadratic Programming Algorithm
Henriksen, Lars Christian; Poulsen, Niels Kjølstad
This technical note documents the trust-region-based sequential quadratic programming algorithm used in other works by the authors. The algorithm seeks to minimize a convex nonlinear cost function subject to linear inequality constraints and nonlinear equality constraints.
Ken eKinjo
2013-04-01
Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space or an eigenfunction problem in a continuous state space using the knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the nonlinear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and the size of a battery in its camera view and two neck joint angles. The action is the velocities of two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic controller (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
Chattopadhyay, Anirban; Khondekar, Mofazzal Hossain; Bhattacharjee, Anup Kumar
2017-09-01
In this paper we search for periodicities in the linear speed of Coronal Mass Ejections (CMEs) in solar cycle 23. Double exponential smoothing and the Discrete Wavelet Transform are used for detrending and filtering the CME linear speed time series. To choose the appropriate statistical methodology, the Smoothed Pseudo Wigner-Ville distribution (SPWVD) has been used beforehand to confirm the non-stationarity of the time series. Time-frequency representation tools, namely the Hilbert-Huang Transform and Empirical Mode Decomposition, have been implemented to uncover the underlying periodicities in the non-stationary time series of the CME linear speed. Of all the periodicities with more than 95% confidence level, the relevant ones have been segregated out using an integral peak detection algorithm. The periodicities observed are of low scale, ranging from 2 to 159 days, with some relevant periods such as 4 days, 10 days, 11 days, 12 days, 13.7 days, 14.5 days and 21.6 days. These short-range periodicities indicate that the probable origin of the CMEs is the active longitudes and the magnetic flux network of the Sun. The results also suggest a probable mutual influence and causality with other solar activities (such as solar radio emission, Ap index, solar wind speed, etc.) owing to the similarity between their periods and the CME linear speed periods. The periodicities of 4 days and 10 days indicate the possible existence of Rossby-type or planetary waves in the Sun.
Design and Implementation of a linear-phase equalizer in digital audio signal processing
Slump, Cornelis H.; van Asma, C.G.M.; Barels, J.K.P.; Barels, J.K.P.; Brunink, W.J.A; Drenth, F.B.; Pol, J.V.; Schouten, D.S.; Samsom, M.M.; Samsom, M.M.; Herrmann, O.E.
1992-01-01
This contribution presents the four phases of a project aiming at the realization in VLSI of a digital audio equalizer with a linear phase characteristic. The first step includes the identification of the system requirements, based on experience and (psycho-acoustical) literature. Secondly, the
Study of resolution and linearity in LaBr3: Ce scintillator through digital-pulse processing
Abhinav Kumar; Mishra, Gaurav; Ramachandran, K.
2014-01-01
The advent of digital pulse processing has led to a paradigm shift in pulse processing techniques, replacing the analog electronics processing chain with equivalent algorithms acting on pulse profiles digitized at high sampling rates. In this paper, we have carried out offline digital pulse processing of cerium-doped lanthanum bromide scintillator (LaBr3:Ce) detector pulses, acquired using a CAEN V1742 VME digitizer module. Algorithms have been written to approximate the functioning of a peak-sensing analog-to-digital converter (ADC) and a charge-to-digital converter (QDC). The energy dependence of the resolution and the energy linearity of the LaBr3:Ce scintillator detector have been studied utilizing the aforesaid algorithms.
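A minimal sketch of such offline algorithms: emulating a peak-sensing ADC (baseline-subtracted maximum) and a QDC (baseline-subtracted gated integral) on a synthetic digitized pulse. The double-exponential pulse shape and all parameters are assumptions for illustration, not the authors' acquisition settings:

```python
import math

def synthetic_pulse(n=1024, baseline_level=50.0, amp=400.0, t0=200,
                    tau_r=5.0, tau_f=40.0):
    # noiseless double-exponential pulse on a flat baseline (assumed shape)
    out = []
    for k in range(n):
        t = k - t0
        s = amp * (math.exp(-t / tau_f) - math.exp(-t / tau_r)) if t > 0 else 0.0
        out.append(baseline_level + s)
    return out

def baseline(samples, n_pre=100):
    # baseline estimate from pre-trigger samples
    return sum(samples[:n_pre]) / n_pre

def peak_adc(samples):
    # peak-sensing ADC emulation: maximum above baseline
    return max(samples) - baseline(samples)

def qdc(samples, gate=(180, 500)):
    # QDC emulation: baseline-subtracted integral over an acquisition gate
    b = baseline(samples)
    return sum(s - b for s in samples[gate[0]:gate[1]])

pulse = synthetic_pulse()
height = peak_adc(pulse)     # energy estimate from pulse height
charge = qdc(pulse)          # energy estimate from integrated charge
```

On this noiseless pulse both estimators scale linearly with the amplitude, which is the kind of linearity study the abstract describes for the real detector.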
Garey, Lorra; Cheema, Mina K; Otal, Tanveer K; Schmidt, Norman B; Neighbors, Clayton; Zvolensky, Michael J
2016-10-01
Smoking rates are markedly higher among trauma-exposed individuals relative to non-trauma-exposed individuals. Extant work suggests that both perceived stress and negative affect reduction smoking expectancies are independent mechanisms that link trauma-related symptoms and smoking. Yet, no work has examined perceived stress and negative affect reduction smoking expectancies as potential explanatory variables for the relation between trauma-related symptom severity and smoking in a sequential pathway model. The present study utilized a sample of treatment-seeking, trauma-exposed smokers (n = 363; 49.0% female) to examine perceived stress and negative affect reduction expectancies for smoking as potential sequential explanatory variables linking trauma-related symptom severity and nicotine dependence, perceived barriers to smoking cessation, and severity of withdrawal-related problems and symptoms during past quit attempts. As hypothesized, perceived stress and negative affect reduction expectancies had a significant sequential indirect effect on the relation between trauma-related symptom severity and the criterion variables. Findings further elucidate the complex pathways through which trauma-related symptoms contribute to smoking behavior and cognitions, and highlight the importance of addressing perceived stress and negative affect reduction expectancies in smoking cessation programs for trauma-exposed individuals. (Am J Addict 2016;25:565-572). © 2016 American Academy of Addiction Psychiatry.
Efficient sequential and parallel algorithms for record linkage.
Mamun, Abdullah-Al; Mi, Tian; Aseltine, Robert; Rajasekaran, Sanguthevar
2014-01-01
Integrating data from multiple sources is a crucial and challenging problem. Even though there exist numerous algorithms for record linkage or deduplication, they suffer from either large running times or restrictions on the number of datasets that they can integrate. In this paper we report efficient sequential and parallel algorithms for record linkage which handle any number of datasets and outperform previous algorithms. Our algorithms employ hierarchical clustering algorithms as the basis. A key idea that we use is radix sorting on certain attributes to eliminate identical records before any further processing. Another novel idea is to form a graph that links similar records and find the connected components. Our sequential and parallel algorithms have been tested on a real dataset of 1,083,878 records and synthetic datasets ranging in size from 50,000 to 9,000,000 records. Our sequential algorithm runs at least two times faster, for any dataset, than the previous best-known algorithm, the two-phase algorithm using faster computation of the edit distance (TPA (FCED)). The speedups obtained by our parallel algorithm are almost linear. For example, we get a speedup of 7.5 with 8 cores (residing in a single node), 14.1 with 16 cores (residing in two nodes), and 26.4 with 32 cores (residing in four nodes). We have compared the performance of our sequential algorithm with TPA (FCED) and found that our algorithm outperforms the previous one. The accuracy is the same as that of this previous best-known algorithm.
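The graph-based step described above can be sketched as follows: link record pairs whose edit distance falls below a threshold, then extract connected components with union-find. The toy records and threshold are illustrative assumptions, not the paper's configuration:

```python
def edit_distance(a, b):
    # standard dynamic-programming Levenshtein distance (row by row)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def connected_components(records, threshold=1):
    parent = list(range(len(records)))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    # build the similarity graph implicitly: union similar record pairs
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if edit_distance(records[i], records[j]) <= threshold:
                parent[find(i)] = find(j)
    comps = {}
    for i in range(len(records)):
        comps.setdefault(find(i), []).append(records[i])
    return list(comps.values())

names = ["john smith", "jon smith", "john smyth", "mary jones", "marry jones"]
clusters = connected_components(names)
```

Note how transitivity does the linking: "jon smith" and "john smyth" are two edits apart, but both are one edit from "john smith", so all three land in one component.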
Jens G. Balchen
1984-10-01
The problem of systematic derivation of a quasi-dynamic optimal control strategy for a non-linear dynamic process based upon a non-quadratic objective function is investigated. The well-known LQG control algorithm does not lead to an optimal solution when the process disturbances have non-zero mean. The relationships between the proposed control algorithm and LQG control are presented. The problem of how to constrain process variables by means of 'penalty' terms in the objective function is dealt with separately.
Classical and sequential limit analysis revisited
Leblond, Jean-Baptiste; Kondo, Djimédo; Morin, Léo; Remmal, Almahdi
2018-04-01
Classical limit analysis applies to ideal plastic materials, and within a linearized geometrical framework implying small displacements and strains. Sequential limit analysis was proposed as a heuristic extension to materials exhibiting strain hardening, and within a fully general geometrical framework involving large displacements and strains. The purpose of this paper is to study and clearly state the precise conditions permitting such an extension. This is done by comparing the evolution equations of the full elastic-plastic problem, the equations of classical limit analysis, and those of sequential limit analysis. The main conclusion is that, whereas classical limit analysis applies to materials exhibiting elasticity - in the absence of hardening and within a linearized geometrical framework -, sequential limit analysis, to be applicable, strictly prohibits the presence of elasticity - although it tolerates strain hardening and large displacements and strains. For a given mechanical situation, the relevance of sequential limit analysis therefore essentially depends upon the importance of the elastic-plastic coupling in the specific case considered.
Sequential memory: Binding dynamics
Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail
2015-10-01
Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study a robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
Sequential Dependencies in Driving
Doshi, Anup; Tran, Cuong; Wilder, Matthew H.; Mozer, Michael C.; Trivedi, Mohan M.
2012-01-01
The effect of recent experience on current behavior has been studied extensively in simple laboratory tasks. We explore the nature of sequential effects in the more naturalistic setting of automobile driving. Driving is a safety-critical task in which delayed response times may have severe consequences. Using a realistic driving simulator, we find…
Mining compressing sequential problems
Hoang, T.L.; Mörchen, F.; Fradkin, D.; Calders, T.G.K.
2012-01-01
Compression based pattern mining has been successfully applied to many data mining tasks. We propose an approach based on the minimum description length principle to extract sequential patterns that compress a database of sequences well. We show that mining compressing patterns is NP-Hard and
Ruiz Egea, E.; Sanchez Carrascal, M.; Torres Pozas, S.; Monja Ray, P. de la; Perez Molina, J. L.; Madan Rodriguez, C.; Luque Japon, L.; Morera Molina, A.; Hernandez Perez, A.; Barquero Bravo, Y.; Morengo Pedagna, I.; Oliva Gordillo, M. C.; Martin Olivar, R.
2011-01-01
In order to try to determine the high dose in the bunker of a linear accelerator for clinical use, we measured the spatial dependence of the dose from the isocenter to the entrance door, seeking to establish its origin. The dose measurements were performed with an ionization chamber at different locations inside the bunker after an irradiation of 400 Monitor Units, verifying the dose rate per minute for an hour and accumulating the dose received during that period of time.
Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu
2018-06-01
A finite element model considering volume shrinkage with the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted, and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method with better accuracy for the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating and cooling rates increase with increasing scan speed at constant laser power, and with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM.
Zhaowei Xiang
2018-06-01
A finite element model considering volume shrinkage with the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted, and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method with better accuracy for the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating and cooling rates increase with increasing scan speed at constant laser power, and with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM. Keywords: Selective laser melting, Volume shrinkage, Powder-to-dense process, Numerical modeling, Thermal analysis, Linear energy density
E. D. Resende
2007-09-01
The freezing process is considered a propagation problem and mathematically classified as an "initial value problem." The mathematical formulation involves a complex situation of heat transfer with simultaneous changes of phase and abrupt variation in thermal properties. The objective of the present work is to solve the non-linear heat transfer equation for food freezing processes using orthogonal collocation on finite elements. This technique has not yet been applied to freezing processes and represents an alternative numerical approach in this area. The results obtained confirmed the good capability of the numerical method, which allows the simulation of the freezing process in approximately one minute of computer time, qualifying its application in a mathematical optimising procedure.The influence of the latent heat released during the crystallisation phenomena was identified by the significant increase in heat load in the early stages of the freezing process.
Bleier, W.
1983-01-01
The polarization of the photons in the elementary processes of electron-nucleus and electron-electron bremsstrahlung was measured. Electrons with an energy of 300 keV were scattered by copper, gold and carbon targets. The polarization in the different processes was measured using different coincidence methods. (BEF)
A linear program for optimal configurable business processes deployment into cloud federation
Rekik, M.; Boukadi, K.; Assy, N.; Gaaloul, W.; Ben-Abdallah, H.; Zhang, J.; Miller, J.A.; Xu, X.
2016-01-01
A configurable process model is a generic model from which an enterprise can derive and execute process variants that meet its specific needs and contexts. With the advent of cloud computing and its economic pay-per-use model, enterprises are increasingly outsourcing partially or totally their
A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes
Martin, Rodney Alexander
2009-01-01
In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
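A one-dimensional sketch of the idea, assuming a scalar AR(1) system: a Kalman filter produces the one-step-ahead predictive Gaussian, and the alarm fires when the predicted probability of exceeding the level passes a design threshold. All model constants are illustrative assumptions, not values from the article:

```python
import math

# Illustrative model constants (assumed, not from the article)
A, Q, R = 0.95, 0.1, 0.2        # state transition, process & measurement noise
LEVEL, P_ALARM = 2.0, 0.5       # critical level and alarm design threshold

def gauss_tail(mean, var, level):
    # P(X > level) for X ~ N(mean, var)
    return 0.5 * (1.0 - math.erf((level - mean) / math.sqrt(2.0 * var)))

def run_alarm(measurements):
    # scalar system x_{k+1} = A x_k + w_k observed as y_k = x_k + v_k
    x, p = 0.0, 1.0             # prior state estimate and covariance
    alarms = []
    for y in measurements:
        k = p / (p + R)         # Kalman gain
        x = x + k * (y - x)     # measurement update
        p = (1.0 - k) * p
        x_pred = A * x          # one-step-ahead prediction
        p_pred = A * A * p + Q
        alarms.append(gauss_tail(x_pred, p_pred, LEVEL) > P_ALARM)
        x, p = x_pred, p_pred
    return alarms

quiet = [0.0] * 50              # readings far below the level
rising = [3.0] * 50             # readings far above the level
```

In a full treatment the threshold would be chosen to fix the detection probability while minimizing false alarms, as the article describes; here it is a fixed design constant.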
Scanning Electron Microscope Calibration Using a Multi-Image Non-Linear Minimization Process
Cui, Le; Marchand, Éric
2015-04-01
A scanning electron microscope (SEM) calibration approach based on a non-linear minimization procedure is presented in this article. (A part of this article was published at the IEEE International Conference on Robotics and Automation (ICRA), 2014.) Both the intrinsic and the extrinsic parameter estimations are achieved simultaneously by minimizing the registration error. The proposed approach considers multiple images of a multi-scale calibration pattern viewed from different positions and orientations. Since the projection geometry of the scanning electron microscope differs from that of a classical optical sensor, the perspective projection model and the parallel projection model are considered and compared, together with distortion models. Experiments are realized by varying the position and the orientation of a multi-scale chessboard calibration pattern from 300× to 10,000×. The experimental results show the efficiency and the accuracy of this approach.
Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.
2018-01-01
… are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
Post-processing with linear optics for improving the quality of single-photon sources
Berry, Dominic W; Scheel, Stefan; Myers, Casey R; Sanders, Barry C; Knight, Peter L; Laflamme, Raymond
2004-01-01
Triggered single-photon sources produce the vacuum state with non-negligible probability, but produce a much smaller multiphoton component. It is therefore reasonable to approximate the output of these photon sources as a mixture of the vacuum and single-photon states. We show that it is impossible to increase the probability for a single photon using linear optics and photodetection on fewer than four modes. This impossibility is due to the incoherence of the inputs; if the inputs were pure-state superpositions, it would be possible to obtain a perfect single-photon output. In the more general case, a chain of beam splitters can be used to increase the probability for a single photon, but at the expense of adding an additional multiphoton component. This improvement is robust against detector inefficiencies, but is degraded by distinguishable photons, dark counts or multiphoton components in the input
Low-impedance internal linear inductive antenna for large-area flat panel display plasma processing
Kim, K.N.; Jung, S.J.; Lee, Y.J.; Yeom, G.Y.; Lee, S.H.; Lee, J.K.
2005-01-01
An internal-type linear inductive antenna, that is, a double-comb-type antenna, was developed for a large-area plasma source having a size of 1020 mm × 830 mm, and high-density plasmas on the order of 2.3 × 10^11 cm^-3 were obtained with 15 mTorr Ar at 5000 W of inductive power with good plasma stability. This is higher than that for the conventional serpentine-type antenna, possibly due to the low impedance, resulting in a high efficiency of power transfer for the double-comb antenna type. In addition, due to the remarkable reduction of the antenna length, a plasma uniformity of less than 8% was obtained within the substrate area of 880 mm × 660 mm at 5000 W without a standing-wave effect.
Linear Covariance Analysis and Epoch State Estimators
Markley, F. Landis; Carpenter, J. Russell
2014-01-01
This paper extends in two directions the results of prior work on generalized linear covariance analysis of both batch least-squares and sequential estimators. The first is an improved treatment of process noise in the batch, or epoch state, estimator with an epoch time that may be later than some or all of the measurements in the batch. The second is to account for process noise in specifying the gains in the epoch state estimator. We establish the conditions under which the latter estimator is equivalent to the Kalman filter.
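The sequential estimator referred to above is the Kalman filter. As a point of reference for readers, here is a minimal scalar sketch of the predict/update cycle; the random-walk state model and the noise values q and r are hypothetical, for illustration only:

```python
def kalman_1d(z_list, x0=0.0, p0=1.0, q=0.01, r=0.25):
    """Scalar Kalman filter for a random-walk state: inflate the
    covariance by process noise q, then update with measurement
    noise r; returns the final state estimate and covariance."""
    x, p = x0, p0
    for z in z_list:
        p += q                      # predict (state is a random walk)
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # measurement update
        p *= (1.0 - k)              # covariance update
    return x, p

x, p = kalman_1d([1.02, 0.98, 1.05, 1.01])
```

The epoch-state (batch) formulation discussed in the paper processes the same measurements relative to a single epoch time; the paper's contribution concerns how process noise enters that batch form.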
NP-Hardness of optimizing the sum of Rational Linear Functions over an Asymptotic-Linear-Program
Chermakani, Deepak Ponvel
2012-01-01
We convert, within polynomial time and sequential processing, an NP-Complete Problem into a real-variable problem of minimizing a sum of Rational Linear Functions constrained by an Asymptotic-Linear-Program. The coefficients and constants in the real-variable problem are 0, 1, -1, K, or -K, where K is the time parameter that tends to positive infinity. The number of variables, constraints, and rational linear functions in the objective of the real-variable problem is bounded by a polynomial ...
Nonlinear fluctuation-induced rate equations for linear birth-death processes
Honkonen, J.
2008-01-01
The Fock-space approach to the solution of master equations for the one-step Markov processes is reconsidered. It is shown that in birth-death processes with an absorbing state at the bottom of the occupation-number spectrum and an occupation-number-independent annihilation probability, occupation-number fluctuations give rise to rate equations drastically different from the polynomial form typical of birth-death processes. The fluctuation-induced rate equations with the characteristic exponential terms are derived for Mikhailov's ecological model and Lanchester's model of modern warfare.
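The master equations studied here govern the stochastic dynamics of linear birth-death processes. A minimal sketch, with illustrative rates only, of an exact (Gillespie-type) simulation of such a process, including the absorbing state at n = 0:

```python
import random

def gillespie_birth_death(n0=10, birth=1.0, death=1.2,
                          t_max=50.0, seed=7):
    """Exact stochastic simulation of a linear birth-death process
    with per-capita rates; returns the trajectory [(t, n), ...]."""
    rng = random.Random(seed)
    t, n = 0.0, n0
    traj = [(t, n)]
    while t < t_max and n > 0:
        total = (birth + death) * n           # total event rate
        t += rng.expovariate(total)           # waiting time to next event
        if rng.random() < birth / (birth + death):
            n += 1                            # birth event
        else:
            n -= 1                            # death event; n = 0 absorbs
        traj.append((t, n))
    return traj

traj = gillespie_birth_death()
```

Averaging many such trajectories exposes the fluctuation effects that the paper shows cannot be captured by naive polynomial rate equations.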
Yuxi Miao
2016-08-01
The free-piston gasoline engine linear generator (FPGLG) is a new kind of power plant consisting of free-piston gasoline engines and a linear generator. Due to the elimination of the crankshaft mechanism, the piston motion process and the combustion heat release process affect each other significantly. In this paper, the combustion characteristics during the stable generating process of a FPGLG were presented using a numerical iteration method, which coupled a zero-dimensional piston dynamic model and a three-dimensional scavenging model with the combustion process simulation. The results indicated that, compared to a conventional engine (CE), the heat release process of the FPGLG lasted longer with a lower peak heat release rate. The indicated thermal efficiency of the engine was lower because less heat was released around the piston top dead centre (TDC). Very little difference was observed in the ignition delay duration between the FPGLG and the CE, while the post-combustion period of the FPGLG was significantly longer than that of the CE. Meanwhile, the FPGLG was found to operate more moderately due to a lower peak in-cylinder gas pressure and a lower pressure rise rate. The potential advantage of the FPGLG in lower NOx emission was also demonstrated by the simulation results presented in this paper.
Sequential Power-Dependence Theory
Buskens, Vincent; Rijt, Arnout van de
2008-01-01
Existing methods for predicting resource divisions in laboratory exchange networks do not take into account the sequential nature of the experimental setting. We extend network exchange theory by considering sequential exchange. We prove that Sequential Power-Dependence Theory—unlike
Digital signals processing using non-linear orthogonal transformation in frequency domain
Ivanichenko E.V.
2017-12-01
The rapid progress of computer technology in recent decades has led to the wide adoption of digital information processing methods in practically all fields of scientific research. Among the various applications of computing, one of the most important places is occupied by digital signal processing (DSP) systems, which are used in remote data processing, navigation of aerospace and marine objects, communications, radiophysics, digital optics, and a number of other applications. Digital signal processing is a dynamically developing area that covers both hardware and software tools. Related areas are information theory, in particular the theory of optimal signal reception, and pattern recognition theory. In the first case the main problem is signal extraction against a background of noise and interference of different physical natures; in the second it is automatic recognition, i.e., classification and identification of signals. In digital signal processing, by a signal we mean its mathematical description, i.e., a certain real function containing information on the state or behavior of a physical system under an event, which can be defined on a continuous or discrete space of time variation or spatial coordinates. In the broad sense, DSP systems are a complex of algorithms, hardware, and software. As a rule, such systems contain specialized technical means for preliminary (primary) signal processing and special technical means for secondary processing of signals. The preprocessing means are designed to process the original signals, observed in the general case against a background of random noise and interference of different physical natures and represented in the form of discrete digital samples, for the purpose of detecting and selecting the useful signal and estimating the characteristics of the detected signal. A new method of digital signal processing in the frequency
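Frequency-domain processing of the kind described above rests on the discrete Fourier transform. A self-contained sketch of the naive DFT, for illustration only (practical DSP systems use the FFT, which computes the same result far faster):

```python
import cmath, math

def dft(x):
    """Naive discrete Fourier transform:
    X[k] = sum_n x[n] * exp(-2*pi*i*k*n/N)."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                for n in range(n_pts))
            for k in range(n_pts)]

# A pure cosine at bin 3 of an 8-point window concentrates its energy
# in bins 3 and N-3 = 5, each with magnitude N/2 = 4.
signal = [math.cos(2 * math.pi * 3 * n / 8) for n in range(8)]
spectrum = dft(signal)
```

Selecting a signal against background noise then amounts to examining which spectral bins carry energy above the noise floor.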
Application of mixed-integer linear programming in a car seats assembling process
Jorge Iván Perez Rave
2011-12-01
In this paper, a decision problem involving a car parts manufacturing company is modeled in order to prepare the company for an increase in demand. Mixed-integer linear programming was used with the following decision variables: creating a second shift, purchasing additional equipment, determining the required work force, and other alternatives involving new manners of work distribution that make it possible to separate certain operations from some workplaces and integrate them into others to minimize production costs. The model was solved using GAMS. The solution consisted of programming 19 workers under a configuration that merges two workplaces and separates some operations from some workplaces. The solution did not involve purchasing additional machinery or creating a second shift. As a result, the manufacturing paradigms that had been valid in the company for over 14 years were broken. This study allowed the company to increase its productivity and obtain significant savings. It also shows the benefits of joint work between academia and companies, and provides useful information for professors, students and engineers regarding production and continuous improvement.
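The structure of such a model, integer work-force levels plus binary yes/no decisions under a demand constraint, can be illustrated with a toy version. All costs, capacities, and the demand below are entirely hypothetical stand-ins (the paper's GAMS model and data are not reproduced here), and the tiny search space is brute-forced rather than handed to an MILP solver:

```python
from itertools import product

# Hypothetical numbers purely for illustration.
WAGE, SHIFT_COST, MACHINE_COST = 120, 1500, 5000
CAP_PER_WORKER, DEMAND = 55, 1000

def solve():
    """Brute-force the tiny MILP: choose a worker count, an optional
    second shift, and an optional extra machine so that capacity
    covers demand at minimum total cost."""
    best = None
    for workers, shift2, machine in product(range(40), (0, 1), (0, 1)):
        capacity = CAP_PER_WORKER * workers * (1 + 0.8 * shift2) \
                   + 300 * machine
        if capacity < DEMAND:
            continue                      # infeasible combination
        cost = WAGE * workers + SHIFT_COST * shift2 + MACHINE_COST * machine
        if best is None or cost < best[0]:
            best = (cost, workers, shift2, machine)
    return best

cost, workers, shift2, machine = solve()
# With these made-up numbers, hiring alone beats the second shift and
# the machine purchase, echoing the shape of the paper's conclusion.
```

A real instance would be declared to a solver (GAMS, as in the paper, or an open-source MILP solver) rather than enumerated, but the objective/constraint structure is the same.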
Feasible Application Area Study for Linear Laser Cutting in Paper Making Processes
Happonen, A.; Stepanov, A.; Piili, H.
Traditional industry sectors, like the paper making industry, tend to stay with well-known technology rather than moving towards promising but still quite new technical solutions and applications. This study analyses the feasibility of laser cutting in large-scale industrial paper making processes. The aim was to reveal development- and process-related challenges and improvement potential in paper making processes utilizing laser technology. This study has been carried out because there still seem to be only a few large-scale industrial laser processing applications in paper converting processes worldwide, even at the beginning of the 2010s. Because of this small-scale use of lasers in the paper material manufacturing industry, there is a shortage of well-known and widely available published research articles and measurement data (e.g. actual achieved cut speeds with high-quality cut edges, set-up times and so on). It was concluded that laser cutting has strong potential in industrial applications for the paper making industry. This potential includes quality improvements and a competitive advantage for paper machine manufacturers and the industry. The innovations also have added potential when developing new paper products. Examples of such products are ones with printed intelligence, which could be a new business opportunity for paper industries all around the world.
Taddei, M.H.T.; Taddei, J.F.A.C.
2005-01-01
Due to their biological risk and long half-lives, the radionuclides 228Ra, 226Ra, 210Pb and 210Po should be monitored frequently to check for any environmental contamination around mines and uranium plants. Currently, the methods used for the determination of these radionuclides take about thirty days to reach the radioactive equilibrium of the 210Pb and 226Ra daughters. The evaluation of effluent discharges and leakage of deposits to water bodies in monitoring programs requires quick answers in order to implement corrective measures. Therefore, fast determination methods must be implemented. This work presents a fast and sequential method to determine 226Ra, 228Ra, 210Pb and 210Po accurately and sensitively, within three days, in water and effluent samples.
Radiation processing of inhomogeneous objects at the 300 MeV electron linear accelerator
Demeshko, O.A.; Kochetov, S.S.; Makhnenko, L.A.; Melnitsky, I.V.; Shopen, O.A.
2009-01-01
Comparison is made between the calculated and experimental doses absorbed by complex density-inhomogeneous objects during their radiation processing. The process of fast electron passage through the object and depth dose formation has been simulated by the Monte Carlo technique using the licensed program package PENELOPE. The calculated and experimental data are found to be in good agreement (to within ∼30%). Preliminary simulation of the process of object irradiation at given conditions provides the necessary information when developing the methods for a particular group of objects. This is of particular importance for bilateral irradiation, when an insignificant density variance of different objects may lead to appreciable errors of dose determination in the symmetry plane of the object.
Bounds for the probability distribution function of the linear ACD process
Fernandes, Marcelo
2003-01-01
This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.
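For readers unfamiliar with the model class, a linear ACD(1,1) specifies a conditional expected duration psi that evolves autoregressively, with observed durations formed by multiplying psi by an i.i.d. positive innovation. A minimal simulation sketch, with exponential innovations and hypothetical parameter values:

```python
import random

def simulate_acd(n=1000, omega=0.1, alpha=0.2, beta=0.7, seed=3):
    """Simulate a linear ACD(1,1):
    psi_i = omega + alpha * x_{i-1} + beta * psi_{i-1},
    x_i = psi_i * eps_i with eps_i ~ Exp(1)."""
    rng = random.Random(seed)
    psi = omega / (1.0 - alpha - beta)   # start at the unconditional mean
    x = psi
    xs = []
    for _ in range(n):
        psi = omega + alpha * x + beta * psi
        x = psi * rng.expovariate(1.0)   # positive duration
        xs.append(x)
    return xs

xs = simulate_acd()
```

With alpha + beta < 1 the process is stationary with unconditional mean omega / (1 - alpha - beta), which equals 1 for the values above; the paper's bounds concern the distribution function of such stationary durations.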
Finite Abstractions of Max-Plus-Linear Systems : Theory and Algorithms
Adzkiya, D.
2014-01-01
Max-Plus-Linear (MPL) systems are a class of discrete-event systems with a continuous state space characterizing the timing of the underlying sequential discrete events. These systems are predisposed to describe the timing synchronization between interleaved processes. MPL systems are employed in
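In a max-plus-linear system, the state holds event times and evolves as x(k+1) = A ⊗ x(k), where ⊗ replaces the usual sum-of-products with a max-of-sums. A minimal sketch with a hypothetical 2-event timing matrix (entries are delays; -inf encodes "no dependency"):

```python
NEG_INF = float("-inf")

def maxplus_mul(A, x):
    """Max-plus matrix-vector product: (A @ x)_i = max_j (A[i][j] + x[j])."""
    return [max(a + v for a, v in zip(row, x)) for row in A]

# Hypothetical timing matrix: A[i][j] is the delay from event j to
# event i in the previous cycle.
A = [[2.0, 5.0],
     [3.0, NEG_INF]]
x0 = [0.0, 0.0]          # both events start at time 0
x1 = maxplus_mul(A, x0)  # event times after one cycle
x2 = maxplus_mul(A, x1)  # ... and after two
```

Iterating this product reproduces exactly the synchronization behaviour (each event waits for the latest of its predecessors) that the abstraction techniques in the thesis analyse.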
Sequential series for nuclear reactions
Izumo, Ko
1975-01-01
A new time-dependent treatment of nuclear reactions is given, in which the wave function of the compound nucleus is expanded by a sequential series of the reaction processes. The wave functions of the sequential series form another complete set of the compound nucleus in the limit Δt→0. It is pointed out that the wave function is characterized by the quantities: the number of degrees of freedom of motion n, the period of the motion (Poincare cycle) t_n, the delay time t_nμ and the relaxation time τ_n to the equilibrium of the compound nucleus, instead of the usual quantum number λ, the energy eigenvalue E_λ and the total width Γ_λ of resonance levels, respectively. The transition matrix elements and the yields of nuclear reactions also become functions of time, given by the Fourier transform of the usual ones. The Poincare cycles of compound nuclei are compared with the observed correlations among resonance levels, which are about 10^-17 to 10^-16 sec for medium and heavy nuclei and about 10^-20 sec for the intermediate resonances. (auth.)
Simpson, Matthew J
2015-01-01
Many processes during embryonic development involve transport and reaction of molecules, or transport and proliferation of cells, within growing tissues. Mathematical models of such processes usually take the form of a reaction-diffusion partial differential equation (PDE) on a growing domain. Previous analyses of such models have mainly involved solving the PDEs numerically. Here, we present a framework for calculating the exact solution of a linear reaction-diffusion PDE on a growing domain. We derive an exact solution for a general class of one-dimensional linear reaction-diffusion processes on 0 < x < L(t), where L(t) is the length of the growing domain. Comparing our exact solutions with numerical approximations confirms the veracity of the method. Furthermore, our examples illustrate a delicate interplay between: (i) the rate at which the domain elongates, (ii) the diffusivity associated with the spreading density profile, (iii) the reaction rate, and (iv) the initial condition. Altering the balance between these four features leads to different outcomes in terms of whether an initial profile, located near x = 0, eventually overcomes the domain growth and colonizes the entire length of the domain by reaching the boundary where x = L(t).
Rao, H.M.; Ghaffari, B.; Yuan, W.; Jordon, J.B.; Badarinarayan, H.
2016-01-01
The microstructure and lap-shear behaviors of friction stir linear welded wrought Al alloy AA6022-T4 to cast Mg alloy AM60B joints were examined. A process window was developed to initially identify the potential process conditions. Multitudes of welds were produced by varying the tool rotation rate and tool traverse speed. Welds produced at 1500 revolutions per minute (rpm) tool rotation rate and either 50 mm/min or 75 mm/min tool traverse speed displayed the highest quasi-static failure load of ~3.3 kN per 30 mm wide lap-shear specimens. Analysis of cross sections of untested coupons indicated that the welds made at these optimum welding parameters had negligible microvoids and displayed a favorable weld geometry for the cold lap and hook features at the faying surface, compared to welds produced using other process parameters. Cross sections of the tested coupons indicated that the dominant crack initiated on the advancing side and progressed through the weld nugget, which consists of intermetallic compounds (IMC). This study demonstrates the feasibility of welding wrought Al and cast Mg alloy via friction stir linear welding with promising lap-shear strength results.
Mar'yanov, B.M.; Shumar, S.V.; Gavrilenko, M.A.
1994-01-01
A method for the computer processing of the curves of potentiometric differential titration using precipitation reactions is developed. The method is based on transformation of the titration curve into a multiphase regression line, whose parameters determine the equivalence points and the solubility products of the formed precipitates. The computational algorithm is tested using experimental curves for the titration of solutions containing Hg(II) and Cd(II) by a solution of sodium diethyldithiocarbamate. The random errors (RSD) for the titration of 1×10^-4 M solutions are in the range of 3-6%. 7 refs.; 2 figs.; 1 tab
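The core idea, locating equivalence points as breakpoints of a piecewise-linear regression, can be illustrated with a simplified two-segment stand-in for the paper's multiphase method. The synthetic data below are hypothetical, not the paper's measurements:

```python
def two_segment_breakpoint(xs, ys):
    """Fit two straight lines to (xs, ys) with one breakpoint; return
    the split index minimising the total sum of squared residuals."""
    def sse(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((a - mx) ** 2 for a in x)
        sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
        slope = sxy / sxx if sxx else 0.0
        return sum((b - (my + slope * (a - mx))) ** 2
                   for a, b in zip(x, y))
    return min(range(2, len(xs) - 1),
               key=lambda k: sse(xs[:k], ys[:k]) + sse(xs[k:], ys[k:]))

# Synthetic "titration curve": the response changes regime at x = 5.
xs = [float(i) for i in range(10)]
ys = [0.1 * x for x in xs[:5]] + [1.0 + 2.0 * (x - 5.0) for x in xs[5:]]
k = two_segment_breakpoint(xs, ys)
```

The paper's multiphase regression generalises this to several breakpoints (one per equivalence point), with the fitted segment parameters also yielding the solubility products.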
Generalized Linear Covariance Analysis
Carpenter, James R.; Markley, F. Landis
2014-01-01
This talk presents a comprehensive approach to filter modeling for generalized covariance analysis of both batch least-squares and sequential estimators. We review and extend in two directions the results of prior work that allowed for partitioning of the state space into "solve-for" and "consider" parameters, accounted for differences between the formal values and the true values of the measurement noise, process noise, and a priori solve-for and consider covariances, and explicitly partitioned the errors into subspaces containing only the influence of the measurement noise, process noise, and solve-for and consider covariances. In this work, we explicitly add sensitivity analysis to this prior work, and relax an implicit assumption that the batch estimator's epoch time occurs prior to the definitive span. We also apply the method to an integrated orbit and attitude problem, in which gyro and accelerometer errors, though not estimated, influence the orbit determination performance. We illustrate our results using two graphical presentations, which we call the "variance sandpile" and the "sensitivity mosaic," and we compare the linear covariance results to confidence intervals associated with ensemble statistics from a Monte Carlo analysis.
Multi input single output model predictive control of non-linear bio-polymerization process
Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Univerisiti Sains Malaysia, Engineering Campus, Seri Ampangan,14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)
2015-05-15
This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of the lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. In this research a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (M_n) and the polymer polydispersity index. The state space model for the MISO system was created using the System Identification Toolbox of Matlab™ and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict, and consequently control, the molecular weight of the biopolymer. The results show that MPC is able to track the reference trajectory and give optimum movement of the manipulated variables.
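The receding-horizon logic of MPC, predict over a short horizon, apply only the first input, repeat, can be sketched in a toy scalar setting. Everything below (the model x+ = a·x + b·u, the candidate input set, the cost weights) is hypothetical and stands in for the paper's identified state-space model:

```python
from itertools import product

def mpc_step(x, ref, a=0.9, b=0.5, horizon=3,
             candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    """One step of a toy MPC: enumerate input sequences over the
    horizon for x+ = a*x + b*u and return the first input of the
    sequence minimising tracking error plus a small input penalty."""
    def cost(seq):
        xi, c = x, 0.0
        for u in seq:
            xi = a * xi + b * u
            c += (xi - ref) ** 2 + 0.01 * u ** 2
        return c
    best = min(product(candidates, repeat=horizon), key=cost)
    return best[0]

# Drive the state from 0 toward a setpoint of 2 (receding horizon).
x, traj = 0.0, []
for _ in range(20):
    u = mpc_step(x, 2.0)
    x = 0.9 * x + 0.5 * u
    traj.append(x)
```

A production MPC replaces the enumeration with a quadratic program over the full MISO state-space model, but the closed-loop pattern of repeated prediction and first-move application is identical.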
Porto da Silva, Edson
Digital signal processing (DSP) has become one of the main enabling technologies for the physical layer of coherent optical communication networks. The DSP subsystems are used to implement several functionalities in the digital domain, from synchronization to channel equalization. Flexibility ... (I) nonlinearity compensation, (II) spectral shaping, and (III) adaptive equalization. For (I), original contributions are presented to the study of nonlinearity compensation (NLC) with digital backpropagation (DBP). Numerical and experimental performance investigations are shown for different application scenarios. Concerning (II), it is demonstrated how optical and electrical (digital) pulse shaping can be allied to improve the spectral confinement of a particular class of optical time-division multiplexing (OTDM) signals that can be used as a building block for fast signaling single-carrier transceivers ...
Sequential Analysis: Hypothesis Testing and Changepoint Detection
2014-07-11
maintains the flexibility of deciding sooner than the fixed sample size procedure at the price of some lower power [13, 514]. The sequential probability ... markets, detection of signals with unknown arrival time in seismology, navigation, radar and sonar signal processing, speech segmentation, and the ... skimming cruise missile can yield a significant increase in the probability of raid annihilation. Furthermore, usually detection systems are
PROCESS SIMULATION OF BENZENE SEPARATION COLUMN OF LINEAR ALKYL BENZENE (LAB) PLANT
Zaid A. AbdelRahman
2013-05-01
The CHEMCAD process simulator was used for the analysis of an existing benzene separation column in a LAB plant (Arab Detergent Company, Beiji, Iraq). Simulated column performance curves were constructed. The variables considered in this study are the thermodynamic model option, top and bottom temperatures, feed temperature, feed composition and reflux ratio. Simulated column profiles for the temperature, vapor and liquid flow rates, and compositions were also constructed. Four different thermodynamic model options (SRK, TSRK, PR and ESSO) were used, affecting the results within 1-25% variation for most cases. For the benzene column (32 real stages, feed stage 14), the simulated results show that at bottom temperatures above 200 °C the weight fractions of the top components, except benzene, increase sharply, whereas the benzene top weight fraction decreases sharply. Feed temperatures above 180 °C show the same trends. The column profiles remain fairly constant from tray 3 (immediately below the condenser) to tray 10 (immediately above the feed) and from tray 15 (immediately below the feed) to tray 25 (immediately above the reboiler). Simulation of the benzene separation column in the LAB production plant using the CHEMCAD simulator confirms the real plant operation data. The study gives evidence of a successful simulation with CHEMCAD.
Linear electron accelerators for medicine and radiation processing developed in Beijing, China
Benguang, G.
1981-01-01
Because of the wide applications in radiotherapy, sterilization, industrial radiography, irradiation processing, etc., the authors started to develop their own machines in this field in 1974. The first linac made in Beijing is a medical one, Model BJ-10. It was completed in 1977, installed at the Beijing Municipal Tumor Institute, and has been used in treatment for 3 years. The parameters of this radiotherapy equipment are determined by the requirements of the treatment of deep and superficial tumors. In the subsystems of this medical linac, the advanced techniques developed since the appearance of the world's first medical linac, such as the isocentric gantry system, are adopted as much as possible. The second machine is an industrial linac, Model BF-5, which finished manufacturing in 1977 and was installed at the Beijing Irradiation Experiment Center. The BF-5 is the successor of the BJ-10 in various techniques. A series of irradiation experiments have been carried out on this machine. The authors are now developing new linacs to meet the demand for cancer therapy, industrial radiography and other applications in their country.
Data acquisition and processing software for linear PSD based neutron diffractometers
Pande, S.S.; Borkar, S.P.; Ghodgaonkar, M.D.
2003-01-01
As part of the data acquisition system for various single- and multi-PSD diffractometers, software was developed to acquire the data and support the requirements of diffraction experiments. The software consists of a front-end Windows 98 application on a PC and a transputer program on the MPSD card. The front-end application provides the entire user interface required for data acquisition, control, presentation and system setup. Data are acquired and the diffraction spectra are generated in the transputer program. All the required hardware control is also implemented in the transputer program. The two programs communicate using a device driver named VTRANSPD. The software plays a vital role in customizing and integrating the data acquisition system for various diffractometer setups. The experiments are also effectively automated in the software, which has helped in making the best use of available beam time. These and other features of the data acquisition and processing software are presented here. This software is being used along with the data acquisition system at a few single-PSD and multi-PSD diffractometers. (author)
Andréa Cristina Fermiano Fidelis
2017-03-01
The growth of health structures and their complexity have led Clinical Engineering professionals to carry out studies to develop and implement health technology management programs. In this way, professionals in this area, integrated with the health system teams, have contributed to making feasible the use of technologies that offer greater security, functionality and reliability. In the radiotherapy area, the increase in the incidence of new cases of cancer, together with the contingency of financial resources for health and the high cost and complexity of the equipment, motivates studies for its adequate management. This research aimed to identify the technologies applied in radiotherapy treatment, in particular the linear accelerator, as well as the concepts of innovation, innovation in services, innovation in processes, and the competitiveness acquired with the aid of innovation. The method used in the research has a qualitative approach, with an exploratory and descriptive objective, with semi-structured and open questions, and involved bibliographic research on the topics of innovation and the linear accelerator, document analysis, a visit to the High Complexity Oncology Unit, and interviews at the General Hospital of Caxias do Sul South, presenting, finally, the impacts on the hospital and the community after the arrival of the linear accelerator. The results showed that there was incremental process and product innovation in the services offered by the hospital.
Dao-ming, Lu
2018-05-01
The negativity of the Wigner function (WF) is one of the important signatures of the non-classical properties of a light field. Therefore, it is of great significance to study the evolution of the WF in a dissipative process. The evolution formula of the WF in the laser process under the action of a linear resonance force is given by virtue of the thermo-entangled state representation and the technique of integration within an ordered product of operators. As its application, the evolution of the WF of the thermal field and that of the single-photon-added coherent state are discussed. The results show that the WF of the thermal field maintains its original character. On the other hand, the negative region size and the depth of negativity of the WF of the single-photon-added coherent state decrease with dissipation until they vanish. This shows that the non-classical property of the single-photon-added coherent state is weakened, until it disappears, as the dissipation time increases.
Random and cooperative sequential adsorption
Evans, J. W.
1993-10-01
Irreversible random sequential adsorption (RSA) on lattices, and continuum "car parking" analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation "jammed" state of models where the lattice or space cannot fill completely. However exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.
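The simplest member of this model family, irreversible dimer adsorption on a one-dimensional lattice, is easy to simulate and has a known exact jamming coverage (Flory's 1 - e^-2 ≈ 0.8647). A minimal sketch: assigning each bond a random deposition order and making a single pass is equivalent to RSA, since a rejected attempt can never later succeed:

```python
import random

def rsa_dimers(n_sites=100000, seed=11):
    """Irreversible random sequential adsorption of dimers on a 1-D
    lattice: attempt bonds in random order, depositing only when both
    sites are still empty; returns the jammed coverage."""
    rng = random.Random(seed)
    occupied = [False] * n_sites
    order = list(range(n_sites - 1))     # candidate left-end sites
    rng.shuffle(order)
    for i in order:
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True
    return sum(occupied) / n_sites

coverage = rsa_dimers()
```

After the pass, no adjacent empty pair can remain (any such pair's attempt would have succeeded), so the final state is jammed, and the measured coverage closes in on 1 - e^-2 as the lattice grows; the filled-site statistics are not those of any equilibrium Gibbs measure, as the review emphasizes.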
FOGWELL, T.W.; LAST, G.V.
2003-01-01
The estimation of the flux of contaminants through the vadose zone to the groundwater under varying geologic, hydrologic, and chemical conditions is key to making technically credible and sound decisions regarding soil site characterization and remediation, single-shell tank retrieval, and waste site closures (DOE 2000). One of the principal needs identified in the science and technology roadmap (DOE 2000) is to improve the conceptual and numerical models that describe the location of contaminants today, and to provide the basis for forecasting future movement of contaminants on both site-specific and site-wide scales. The State of Knowledge (DOE 1999) and Preliminary Concepts documents describe the importance of geochemical processes on the transport of contaminants through the vadose zone. These processes have been identified in the international list of Features, Events, and Processes (FEPs) (NEA 2000) and included in the list of FEPs currently being developed for Hanford Site assessments (Soler et al. 2001). The current vision for Hanford site-wide cumulative risk assessments as performed using the System Assessment Capability (SAC) is to represent contaminant adsorption using the linear isotherm (empirical distribution coefficient, Kd) sorption model. Integration Project Expert Panel (PEP) comments indicate that work is required to adequately justify the applicability of the linear sorption model, and to identify and defend the range of Kd values adopted for assessments. The work plans developed for the Science and Technology (S and T) efforts, SAC, and the Core Projects must answer directly the question: "Is there a scientific basis for the application of the linear sorption isotherm model to the complex wastes of the Hanford Site?" This paper is intended to address these issues. The reason that well-documented justification is required for using the linear sorption (Kd) model is that this approach is strictly empirical and is often
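In transport calculations, the practical role of the linear Kd model is to scale contaminant velocity through the standard retardation factor R = 1 + (rho_b / theta) * Kd, where rho_b is bulk density and theta is volumetric water content. A minimal sketch; the soil-property values below are illustrative placeholders, not Hanford data:

```python
def retardation_factor(kd_ml_g, bulk_density_g_cm3=1.6, porosity=0.3):
    """Retardation factor for the linear sorption (Kd) model:
    R = 1 + (rho_b / theta) * Kd, with Kd in mL/g (= cm^3/g)."""
    return 1.0 + (bulk_density_g_cm3 / porosity) * kd_ml_g

# A non-sorbing tracer (Kd = 0) moves with the water (R = 1), while
# Kd = 0.6 mL/g retards transport by a factor of about 4.2 here.
r_tracer = retardation_factor(0.0)
r_sorbed = retardation_factor(0.6)
```

The linearity of this relation is exactly what the paper says must be justified: a single empirical Kd presumes sorption independent of concentration and chemistry, which may not hold for complex tank wastes.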
The Bacterial Sequential Markov Coalescent.
De Maio, Nicola; Wilson, Daniel J
2017-05-01
Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example, leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions (homoplasies) inconsistent with the hypothesis of a single evolutionary tree. Bacterial recombination is typically modeled as statistically akin to gene conversion in eukaryotes, i.e., using the coalescent with gene conversion (CGC). However, this model can be very computationally demanding as it needs to account for the correlations of evolutionary histories of even distant loci. So, with the increasing popularity of whole genome sequencing, the need has emerged for a faster approach to model and simulate bacterial genome evolution. We present a new model that approximates the coalescent with gene conversion: the bacterial sequential Markov coalescent (BSMC). Our approach is based on a similar idea to the sequential Markov coalescent (SMC), an approximation of the coalescent with crossover recombination. However, bacterial recombination poses hurdles to a sequential Markov approximation, as it leads to strong correlations and linkage disequilibrium across very distant sites in the genome. Our BSMC overcomes these difficulties, and shows a considerable reduction in computational demand compared to the exact CGC, and very similar patterns in simulated data. We implemented our BSMC model within new simulation software FastSimBac. In addition to the decreased computational demand compared to previous bacterial genome evolution simulators, FastSimBac provides more general options for evolutionary scenarios, allowing population structure with migration, speciation, population size changes, and recombination hotspots. FastSimBac is
The parallel-sequential field subtraction technique for coherent nonlinear ultrasonic imaging
Cheng, Jingwei; Potter, Jack N.; Drinkwater, Bruce W.
2018-06-01
Nonlinear imaging techniques have recently emerged which have the potential to detect cracks at a much earlier stage than was previously possible and have sensitivity to partially closed defects. This study explores a coherent imaging technique based on the subtraction of two modes of focusing: parallel, in which the elements are fired together with a delay law, and sequential, in which elements are fired independently. In parallel focusing a high intensity ultrasonic beam is formed in the specimen at the focal point, whereas in sequential focusing only low intensity signals from individual elements enter the sample and the full matrix of transmit-receive signals is recorded and post-processed to form an image. Under linear elastic assumptions, both parallel and sequential images are expected to be identical. Here we measure the difference between these images and use this to characterise the nonlinearity of small closed fatigue cracks. In particular we monitor the change in relative phase and amplitude at the fundamental frequencies for each focal point and use this nonlinear coherent imaging metric to form images of the spatial distribution of nonlinearity. The results suggest the subtracted image can suppress linear features (e.g. back wall or large scatterers) effectively when instrumentation noise compensation is applied, thereby allowing damage to be detected at an early stage (c. 15% of fatigue life) and reliably quantified in later fatigue life.
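The subtraction at the heart of this technique can be sketched in a few lines. The arrays, the toy "crack" amplitude, and the function name below are illustrative assumptions, not the authors' implementation (which works on coherent phase and amplitude at the fundamental frequency):

```python
import numpy as np

# Under linear elasticity, parallel and sequential focusing yield the same
# image, so their pixel-wise difference highlights only nonlinear scatterers
# (e.g. closed cracks). The data here are invented for illustration.

def subtraction_image(parallel_img, sequential_img):
    """Coherent difference image; linear features cancel."""
    return np.abs(parallel_img - sequential_img)

# A linear back-wall echo appears identically in both modes, while a closed
# crack adds a response only under high-power parallel focusing.
linear_feature = np.zeros((8, 8)); linear_feature[6, :] = 1.0   # back wall
crack = np.zeros((8, 8)); crack[3, 4] = 0.2                     # nonlinear response

sequential = linear_feature
parallel = linear_feature + crack
diff = subtraction_image(parallel, sequential)
```

In the difference image the back-wall row cancels exactly and only the crack pixel survives, which is the suppression effect the abstract describes.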
Shin, Boo Young; Han, Do Hung
2014-01-01
The aim of this study was to compatibilize the immiscible polyamide 6 (PA6)/linear low density polyethylene (LLDPE) blend by using an electron-beam initiated mediation process. Glycidyl methacrylate (GMA) was chosen as a mediator for cross-copolymerization at the interface between PA6 and LLDPE. The PA6/LLDPE/GMA mixture was prepared using a twin-screw extruder and then exposed to electron-beam radiation at various doses at room temperature to initiate cross-copolymerization mediated by GMA at the interface. To assess this compatibilization strategy, the morphological and mechanical properties of the blend were analyzed. The morphology study revealed that the diameters of the dispersed particles decreased and the interfacial adhesion increased with increasing irradiation dose. The elongation at break of the blends increased significantly with irradiation dose up to 100 kGy, while the tensile strength and modulus increased nonlinearly with increasing irradiation dose. Reaction mechanisms for the mediation process with the GMA mediator at the PA6/LLDPE interface were proposed. - Highlights: • PA6/LLDPE blend was compatibilized by the electron-beam initiated mediation process. • Interfacial adhesion was significantly enhanced by the radiation-initiated cross-copolymerization. • The elongation at break of the blend irradiated at 100 kGy was 4 times higher than that of PA6. • GMA as a mediator played a key role in the electron-beam initiated mediation process
Chang, M.-C.; Shu, H.-Y.; Yu, H.-H.
2006-01-01
Zero-valent iron (ZVI) reduction achieves decolorization, while the UV/H2O2 oxidation process achieves mineralization; this study therefore proposed an integrated technique coupling reduction with oxidation in order to achieve complete decolorization and mineralization of C.I. Acid Black 24 simultaneously. The experimental data show that zero-valent iron addition alone can decolorize the dye wastewater, yet it demands a longer time than ZVI coupled with the UV/H2O2 process (Red-Ox). Moreover, it removed only about 30% of the total organic carbon (TOC), which could be effectively mineralized by the UV/H2O2 process. The proposed sequential ZVI-UV/H2O2 integrated system can not only effectively remove color and TOC from AB 24 wastewater simultaneously but also save irradiation power and time. Furthermore, the decolorization rate constants were about 3.77-4.0 times those of the UV/H2O2 process alone
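The rate-constant comparison quoted above assumes first-order decolorization kinetics; a minimal sketch of how such a constant is fitted is shown below. The data points are invented for illustration, not taken from the study:

```python
import math

# Fit the first-order decolorization law ln(C0/C) = k * t by least squares
# through the origin; comparing k values between processes gives ratios like
# the 3.77-4.0x figure in the abstract. Data below are synthetic.

def first_order_k(times, concentrations):
    """Least-squares slope of ln(C0/C) versus t, forced through the origin."""
    c0 = concentrations[0]
    num = sum(t * math.log(c0 / c) for t, c in zip(times, concentrations))
    den = sum(t * t for t in times)
    return num / den

# synthetic, exactly first-order decay with k = 0.2 per minute
t = [0.0, 1.0, 2.0, 3.0, 4.0]
c = [math.exp(-0.2 * ti) for ti in t]
k = first_order_k(t, c)
```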
Sensitivity Analysis in Sequential Decision Models.
Chen, Qiushi; Ayer, Turgay; Chhatwal, Jagpreet
2017-02-01
Sequential decision problems, which are frequently encountered in medical decision making, are commonly solved using Markov decision processes (MDPs). Modeling guidelines recommend conducting sensitivity analyses in decision-analytic models to assess the robustness of the model results against uncertainty in model parameters. However, standard methods of conducting sensitivity analyses cannot be directly applied to sequential decision problems because this would require evaluating all possible decision sequences, typically on the order of trillions, which is not practically feasible. As a result, most MDP-based modeling studies do not examine confidence in their recommended policies. In this study, we provide an approach to estimate uncertainty and confidence in the results of sequential decision models. First, we provide a probabilistic univariate method to identify the most sensitive parameters in MDPs. Second, we present a probabilistic multivariate approach to estimate the overall confidence in the recommended optimal policy considering joint uncertainty in the model parameters. We provide a graphical representation, which we call a policy acceptability curve, to summarize the confidence in the optimal policy by incorporating stakeholders' willingness to accept the base-case policy. For a cost-effectiveness analysis, we provide an approach to construct a cost-effectiveness acceptability frontier, which shows the most cost-effective policy as well as the confidence in it for a given willingness-to-pay threshold. We demonstrate our approach using a simple MDP case study. We developed a method to conduct sensitivity analysis in sequential decision models, which could increase the credibility of these models among stakeholders.
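The multivariate idea can be sketched with a toy model: sample the uncertain parameter, re-solve the MDP, and report how often the base-case optimal policy is recovered. The two-state, two-action MDP and all numbers below are invented assumptions; the paper's case study is not reproduced here:

```python
import numpy as np

# Toy MDP: states ("well", "ill"), actions ("wait", "treat"), with an
# uncertain treatment-success probability. The method, not the model,
# is the point; every figure here is illustrative.

def optimal_policy(p_success, gamma=0.9, iters=300):
    P = {0: np.array([[0.9, 0.1], [0.0, 1.0]]),                    # wait
         1: np.array([[0.9, 0.1], [p_success, 1.0 - p_success]])}  # treat
    R = {0: np.array([1.0, 0.0]), 1: np.array([0.8, -0.1])}        # treat costs a little
    V = np.zeros(2)
    for _ in range(iters):                    # value iteration
        Q = np.array([R[a] + gamma * P[a] @ V for a in (0, 1)])
        V = Q.max(axis=0)
    return tuple(Q.argmax(axis=0))            # greedy action per state

# probabilistic sensitivity analysis over the uncertain parameter
rng = np.random.default_rng(0)
base = optimal_policy(0.7)                    # base-case optimal policy
draws = rng.beta(7, 3, size=500)              # assumed prior on p_success
confidence = np.mean([optimal_policy(p) == base for p in draws])
```

The resulting `confidence` is the quantity a policy acceptability curve would plot as the acceptance threshold varies.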
Adaptive sequential controller
El-Sharkawi, Mohamed A. (Renton, WA); Xing, Jian (Seattle, WA); Butler, Nicholas G. (Newberg, OR); Rodriguez, Alonso (Pasadena, CA)
1994-01-01
An adaptive sequential controller (50/50') for controlling a circuit breaker (52) or other switching device to substantially eliminate transients on a distribution line caused by closing and opening the circuit breaker. The device adaptively compensates for changes in the response time of the circuit breaker due to aging and environmental effects. A potential transformer (70) provides a reference signal corresponding to the zero crossing of the voltage waveform, and a phase shift comparator circuit (96) compares the reference signal to the time at which any transient was produced when the circuit breaker closed, producing a signal indicative of the adaptive adjustment that should be made. Similarly, in controlling the opening of the circuit breaker, a current transformer (88) provides a reference signal that is compared against the time at which any transient is detected when the circuit breaker last opened. An adaptive adjustment circuit (102) produces a compensation time that is appropriately modified to account for changes in the circuit breaker response, including the effect of ambient conditions and aging. When next opened or closed, the circuit breaker is activated at an appropriately compensated time, so that it closes when the voltage crosses zero and opens when the current crosses zero, minimizing any transients on the distribution line. Phase angle can be used to control the opening of the circuit breaker relative to the reference signal provided by the potential transformer.
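The adaptive timing idea can be sketched as follows: the controller keeps an estimate of the breaker's mechanical delay and nudges it by the error observed at the last operation (the offset between the zero crossing and the detected transient). The function names, gain, and figures are assumptions for illustration, not the patented circuit:

```python
# Hypothetical sketch of adaptive zero-crossing compensation. All values
# (initial delay, gain, measured errors) are invented for illustration.

def update_delay(estimated_delay_ms, observed_error_ms, gain=0.5):
    """Move the delay estimate part-way toward the observed timing error."""
    return estimated_delay_ms + gain * observed_error_ms

def trigger_time(next_zero_crossing_ms, estimated_delay_ms):
    """Fire the coil early so the contacts operate at the zero crossing."""
    return next_zero_crossing_ms - estimated_delay_ms

delay = 30.0                      # initial estimate of breaker response, ms
for err in (4.0, 1.8, 0.9):       # transient offsets from successive closings
    delay = update_delay(delay, err)

fire_at = trigger_time(100.0, delay)
```

The shrinking error sequence mimics the convergence the adaptive adjustment circuit achieves as aging and ambient effects are learned.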
Computing Sequential Equilibria for Two-Player Games
Miltersen, Peter Bro; Sørensen, Troels Bjerre
2006-01-01
Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Koller and Pfeffer pointed out that the strategies...... obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming...... a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial time. In addition, the equilibrium we find is normal-form perfect. Our technique generalizes to general-sum games...
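The Koller-Megiddo-von Stengel algorithm operates on the sequence form of the extensive-form game, but the shape of the linear program is already visible in the normal-form special case. A hedged sketch for a 2x2 matching-pennies matrix, assuming SciPy is available:

```python
import numpy as np
from scipy.optimize import linprog

# Minimax LP for a matrix game (normal-form special case; the paper's
# construction works on the sequence form instead). Variables: the row
# player's mixed strategy x and the game value v.
A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])       # matching pennies, payoffs to the row player

m, n = A.shape
# maximize v  subject to  sum_i x_i * A[i, j] >= v for each column j, sum x = 1
c = np.zeros(m + 1); c[-1] = -1.0                 # linprog minimizes, so use -v
A_ub = np.hstack([-A.T, np.ones((n, 1))])         # v - x^T A[:, j] <= 0
b_ub = np.zeros(n)
A_eq = np.array([[1.0] * m + [0.0]])
b_eq = np.array([1.0])
bounds = [(0.0, 1.0)] * m + [(None, None)]        # v is unbounded

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
x, v = res.x[:m], res.x[m]
```

For matching pennies the LP recovers the uniform strategy and game value zero; the sequence-form version adds one variable per sequence and one constraint per information set but keeps this structure.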
Computing sequential equilibria for two-player games
Miltersen, Peter Bro
2006-01-01
Koller, Megiddo and von Stengel showed how to efficiently compute minimax strategies for two-player extensive-form zero-sum games with imperfect information but perfect recall using linear programming and avoiding conversion to normal form. Their algorithm has been used by AI researchers...... for constructing prescriptive strategies for concrete, often fairly large games. Koller and Pfeffer pointed out that the strategies obtained by the algorithm are not necessarily sequentially rational and that this deficiency is often problematic for the practical applications. We show how to remove this deficiency...... by modifying the linear programs constructed by Koller, Megiddo and von Stengel so that pairs of strategies forming a sequential equilibrium are computed. In particular, we show that a sequential equilibrium for a two-player zero-sum game with imperfect information but perfect recall can be found in polynomial...
P. Y. Rogov
2015-09-01
The paper deals with a mathematical model of the linear and nonlinear processes occurring during the propagation of femtosecond laser pulses in the vitreous of the human eye. Computer modeling methods are applied to solve the nonlinear spectral equation describing the dynamics of two-dimensional TE-polarized radiation in a homogeneous isotropic medium with cubic fast-response nonlinearity, without using the slowly varying envelope approximation. Media with parameters close to those of the optical media of the eye were used for the simulation. The model of femtosecond radiation propagation takes into account the dynamics of dispersion broadening of pulses in time and the occurrence of self-focusing near the retina when passing through the vitreous body of the eye. The dependence of the pulse duration at the retina on the duration of the input pulse has been revealed, and the power density values at which self-focusing occurs have been found. It is shown that the main mechanism of radiation damage with the use of a titanium-sapphire laser is photoionization. The results coincide with those obtained by other scientists and are usable for creating Russian laser safety standards for femtosecond laser systems.
Vilaragut Llanes, J.J.; Ferro Fernandez, R.; Rodriguez Marti, M.; Ramirez, M.L.; Perez Mulas, A.; Barrientos Montero, M.; Ortiz Lopez, P.; Somoano, F.; Delgado Rodriguez, J.M.; Papadopulos, S.B.; Pereira, P.P. Jr.; Lopez Morones, R.; Larrinaga Cortinai, E.; Rivero Oliva, J.J.; Alemany, J.
2008-01-01
This paper presents the results of the Probabilistic Safety Assessment (PSA) of the radiotherapy treatment process with an Electron Linear Accelerator (LINAC) for medical uses, which was conducted in the framework of the Extrabudgetary Programme on Nuclear and Radiological Safety in Ibero-America. The PSA tools were used to evaluate occupational, public and medical exposures during treatment. The study focused on the radiological protection of patients. Equipment failure modes and human errors were evaluated for each system and treatment phase by FMEA. The aim was to obtain an exhaustive list of deviations with a reasonable probability of occurrence which might produce significant adverse outcomes. Separate event trees were constructed for each initiating event group. Each event tree had a different structure, since the initiating events were grouped according to mitigation requirements. Fault tree models were constructed for each top event. The fault trees were developed down to the level of components. In addition to hardware faults, the fault trees included human errors associated with the response to accidents and human errors associated with the treatment. Each accident sequence was quantified. The accident sequences were analysed by combining the initiating event and top events through the fault trees. After combining the appropriate models, a Boolean reduction was conducted by computer software to produce sequence cut sets. Several findings concerning the treatment process were analysed, and the study proposed safety recommendations to address them. (author)
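The quantification step described above multiplies an initiating-event frequency by the failure probabilities of the barriers in each minimal cut set. A minimal sketch, with purely illustrative numbers (not figures from the assessment):

```python
# Toy accident-sequence quantification: sequence frequency = initiating-event
# frequency x product of barrier failure probabilities in the cut set.
# All numbers are invented assumptions for illustration.

def sequence_frequency(initiator_per_year, barrier_failure_probs):
    f = initiator_per_year
    for p in barrier_failure_probs:
        f *= p
    return f

# e.g. a setup human error (0.5/yr) that causes an accidental exposure only
# if both an independent check (1e-2) and in-vivo dosimetry (1e-1) fail:
freq = sequence_frequency(0.5, [1e-2, 1e-1])
```

Summing such frequencies over all sequences, and ranking each event by its contribution, yields the importance results the study reports.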
Galbraith, R.F.; Laslett, G.M.; Green, P.F.; Duddy, I.R.
1990-01-01
Spontaneous fission of uranium atoms over geological time creates a random process of linearly shaped features (fission tracks) inside an apatite crystal. The theoretical distributions associated with this process are governed by the elapsed time and temperature history, but other factors are also reflected in empirical measurements as consequences of sampling by plane section and chemical etching. These include geometrical biases leading to over-representation of long tracks, the shape and orientation of host features when sampling totally confined tracks, and 'gaps' in heavily annealed tracks. We study the estimation of geological parameters in the presence of these factors using measurements on both confined tracks and projected semi-tracks. Of particular interest is a history of sedimentation, uplift and erosion giving rise to a two-component mixture of tracks in which the parameters reflect the current temperature, the maximum temperature and the timing of uplift. A full likelihood analysis based on all measured densities, lengths and orientations is feasible, but because some geometrical biases and measurement limitations are only partly understood it seems preferable to use conditional likelihoods given numbers and orientations of confined tracks. (author)
Vilaragut Llanes, Juan Jose; Fernandez, Ruben Ferro; Ortiz Lopez, Pedro
2009-01-01
Radiation safety assessments have traditionally been based on analyzing the lessons learned from events as they become known. Although these methods are valuable, their main limitation is that they cover only known events and do not consider other possible failures that have not occurred or have not been published, which does not mean they cannot occur. Other tools analyze safety prospectively, among them Probabilistic Safety Assessment (PSA). This paper summarizes the project of the Ibero-American Forum of Radiological and Nuclear Regulatory Agencies aimed at applying PSA methods to the treatment process with a linear accelerator. Accidental exposures of both a single patient and multiple patients were defined as the unintended consequences. FMEA methodology was used to define accident initiating events, and event tree and fault tree methods were used to identify the accident sequences that may occur. Once the frequency of occurrence of the accident sequences was quantified, importance analyses were performed to determine the events most significant from the safety point of view. We identified 158 equipment failure modes and 295 human errors that, if they occurred, would have the potential to cause the defined accidental exposures. We studied 118 accident initiating events, 120 barriers and 434 accident sequences. Accidental exposure of a single patient was 40 times more likely than that of multiple patients. 100% of the total frequency of accidental exposures of a single patient is caused by human errors. For multiple patients, 8% of the total frequency arises from initiating events due to equipment failures (computed tomography, treatment planning system, linear accelerator) and 92% from human errors. As part of the recommendations, the study presents the events that contribute most to reducing the risk of accidental exposure. (author)
Soliman, Moomen; Eldyasti, Ahmed
2017-06-01
Recently, partial nitrification has been adopted widely, either for the nitrite shunt process or as an intermediate nitrite generation step for the Anammox process. However, partial nitrification has been hindered by the complexity of maintaining stable nitrite accumulation at high nitrogen loading rates (NLR), which affects the feasibility of the process for high nitrogen content wastewater. Thus, the operational data of a lab-scale SBR performing complete partial nitrification as the first step of a nitrite shunt process at NLRs of 0.3-1.2 kg/(m³·d) have been used to calibrate and validate a process model developed using BioWin® in order to describe the long-term dynamic behavior of the SBR. Moreover, an identifiability analysis step has been introduced into the calibration protocol to eliminate the need for respirometric analysis in SBR models. The calibrated model was able to accurately predict the daily effluent ammonia, nitrate, nitrite and alkalinity concentrations and pH during all the different operational conditions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Crashworthiness design optimization using multipoint sequential linear programming
Etman, L.F.P.; Adriaens, J.M.T.A.; Slagmaat, van M.T.P.; Schoofs, A.J.G.
1996-01-01
A design optimization tool has been developed for the crash victim simulation software MADYMO. The crashworthiness optimization problem is characterized by noisy behaviour of the objective function and constraints. Additionally, objective function and constraint values follow from a computationally
Zhao, T.
2018-01-01
The popularity of thermoplastic composites (TPCs) has been growing steadily in the last decades in the aircraft industry. This is not only because of their excellent material properties, but also owing to their fast and cost-effective manufacturing process. Fusion bonding, or welding, is a typical
Fast regularizing sequential subspace optimization in Banach spaces
Schöpfer, F; Schuster, T
2009-01-01
We are concerned with fast computations of regularized solutions of linear operator equations in Banach spaces in case only noisy data are available. To this end we modify recently developed sequential subspace optimization methods in such a way that the therein employed Bregman projections onto hyperplanes are replaced by Bregman projections onto stripes whose width is in the order of the noise level
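The modified step can be sketched in the Hilbert-space (l2) case, where the Bregman projection reduces to an orthogonal projection: the iterate is corrected only when it lies outside a stripe whose half-width matches the noise level. The names and values below are illustrative assumptions:

```python
import numpy as np

# Projection onto the stripe {x : |<u, x> - b| <= xi}. Inside the stripe no
# step is taken; outside, the iterate is moved orthogonally onto the nearer
# boundary hyperplane. Sketch only, for the l2 (Hilbert-space) special case.

def project_onto_stripe(x, u, b, xi):
    r = np.dot(u, x) - b
    if abs(r) <= xi:
        return x                                   # already inside: no correction
    step = (r - np.sign(r) * xi) / np.dot(u, u)    # move just onto the boundary
    return x - step * u

x_new = project_onto_stripe(np.array([2.0, 0.0]), np.array([1.0, 0.0]), 0.0, 0.5)
```

Replacing hyperplane projections by such stripe projections is what spares unnecessary iterations when the data are only accurate to the noise level.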
Quantum Inequalities and Sequential Measurements
Candelpergher, B.; Grandouz, T.; Rubinx, J.L.
2011-01-01
In this article, the peculiar context of sequential measurements is chosen in order to analyze the quantum specificity in the two most famous examples, the Heisenberg and Bell inequalities: results are found at some interesting variance with customary textbook material, where the context of initial state re-initialization is described. A key point of the analysis is the possibility of defining Joint Probability Distributions for sequential random variables associated with quantum operators. Within the sequential context, it is shown that Joint Probability Distributions can be defined in situations where not all of the quantum operators (corresponding to random variables) commute two by two. (authors)
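The existence of such joint distributions can be illustrated with a single qubit measured sequentially in two non-commuting bases: the first measurement collapses the state before the second is applied, so a genuine joint distribution exists by construction. The basis names and state are a toy assumption, not the article's worked examples:

```python
import numpy as np

# Sequential projective measurements on a qubit: P(i, j) is the probability
# of outcome i in the first basis followed by outcome j in the second, even
# though Z and X do not commute. Toy sketch with an assumed initial state.
Z_basis = [np.array([1, 0], complex), np.array([0, 1], complex)]   # eigenvectors of Z
X_basis = [np.array([1, 1], complex) / np.sqrt(2),
           np.array([1, -1], complex) / np.sqrt(2)]                # eigenvectors of X

def sequential_joint(psi, first_basis, second_basis):
    """P(i, j) = P(first outcome i) * P(second outcome j | collapsed state)."""
    P = np.zeros((2, 2))
    for i, e1 in enumerate(first_basis):
        p1 = abs(np.vdot(e1, psi)) ** 2
        for j, e2 in enumerate(second_basis):
            P[i, j] = p1 * abs(np.vdot(e2, e1)) ** 2
    return P

psi = np.array([1, 0], complex)        # start in the Z "up" state
P = sequential_joint(psi, Z_basis, X_basis)
```

Here the first outcome is certain and the second is uniformly random, so the joint distribution is well defined despite the non-commutativity.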
Comparing two Poisson populations sequentially: an application
Halteman, E.J.
1986-01-01
Rocky Flats Plant in Golden, Colorado monitors each of its employees for radiation exposure. Excess exposure is detected by comparing the means of two Poisson populations. A sequential probability ratio test (SPRT) is proposed as a replacement for the fixed-sample normal approximation test. A uniformly most efficient SPRT exists; however, logistics suggest using a truncated SPRT. The truncated SPRT is evaluated in detail and shown to possess large potential savings in average time spent by employees in the monitoring process
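A truncated SPRT for Poisson means can be sketched directly from Wald's log-likelihood-ratio recursion. The rates, error probabilities, and truncation point below are assumed example values, not plant figures:

```python
import math

# Truncated SPRT for H0: rate = lam0 versus H1: rate = lam1 (> lam0), with
# Wald's approximate thresholds. Each observation x is a Poisson count.

def poisson_sprt(counts, lam0, lam1, alpha=0.05, beta=0.05, max_n=50):
    upper = math.log((1 - beta) / alpha)
    lower = math.log(beta / (1 - alpha))
    llr = 0.0
    for n, x in enumerate(counts, start=1):
        # log-likelihood-ratio increment for one Poisson observation
        llr += x * math.log(lam1 / lam0) - (lam1 - lam0)
        if llr >= upper:
            return "accept H1", n
        if llr <= lower:
            return "accept H0", n
        if n >= max_n:                 # truncation rule: force a decision
            return ("accept H1" if llr > 0 else "accept H0"), n
    return "continue", len(counts)
```

With consistently elevated counts the test stops after a few observations, which is exactly the saving in monitoring time the abstract reports.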
Sequential test procedures for inventory differences
Goldman, A.S.; Kern, E.A.; Emeigh, C.W.
1985-01-01
By means of a simulation study, we investigated the appropriateness of Page's and power-one sequential tests on sequences of inventory differences obtained from an example materials control unit, a sub-area of a hypothetical UF6-to-U3O8 conversion process. The study examined detection probability and run length curves obtained from different loss scenarios. 12 refs., 10 figs., 2 tabs
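Page's test is a one-sided CUSUM on the sequence of inventory differences; a minimal sketch follows. The reference value k and decision limit h are assumed example parameters:

```python
# One-sided Page (CUSUM) test on inventory differences: accumulate positive
# drift above a reference value k and alarm when the sum exceeds h.
# Parameter values are illustrative assumptions.

def page_cusum(diffs, k=0.5, h=4.0):
    """Return the first index at which the CUSUM alarms, or None."""
    s = 0.0
    for i, d in enumerate(diffs):
        s = max(0.0, s + d - k)   # reset to zero on downward drift
        if s > h:
            return i
    return None
```

Run-length curves like those in the study are obtained by applying this to many simulated difference sequences per loss scenario and tabulating the alarm indices.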
Jaimes Figueroa, Jaiver Efren; Rodrigues, Maria Isabel; Wolf Maciel, Maria Regina
2016-01-01
Nowadays, one of the methods available to obtain anhydrous ethanol is the extractive distillation process, which presents great potential depending on the solvent used. It is imperative that the solvent promote dehydration, but low cost, low energy consumption, and low waste generation and emissions must also be taken into account. Within this context, there is high demand for new efficient solvents for extractive distillation of the ethanol-water mixture, so the ionic liquids (ILs) have som...
Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.
2016-06-01
Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphical and volcano-structural data, as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only in those cases where they could modify the superficial movement of magma. Kernel density estimation via a linear diffusion process has been applied successfully to the volcanic susceptibility assessment of Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.
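The susceptibility surface is conceptually a kernel smoothing of past vent locations; since the Gaussian kernel is the Green's function of linear diffusion, a fixed-bandwidth Gaussian KDE gives a minimal sketch of the idea. Grid, vent coordinates, and bandwidth below are invented assumptions:

```python
import numpy as np

# Smooth past vent locations with a Gaussian kernel to get a relative
# probability surface for future vents, then normalise it to a discrete
# probability map. Sketch only; the paper's diffusion-based bandwidth
# selection is not reproduced here.

def susceptibility(grid_x, grid_y, vents, bandwidth=1.0):
    xx, yy = np.meshgrid(grid_x, grid_y)
    density = np.zeros_like(xx, dtype=float)
    for vx, vy in vents:
        density += np.exp(-((xx - vx) ** 2 + (yy - vy) ** 2) / (2 * bandwidth ** 2))
    return density / density.sum()

g = np.linspace(-2.0, 2.0, 5)
smap = susceptibility(g, g, [(0.0, 0.0)])   # single assumed vent at the origin
```

The map peaks at the vent and decays outward; with real vent and fissure data the same normalised surface ranks where the next eruption is most likely.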
Framework for sequential approximate optimization
Jacobs, J.H.; Etman, L.F.P.; Keulen, van F.; Rooda, J.E.
2004-01-01
An object-oriented framework for Sequential Approximate Optimization (SAO) is proposed. The framework aims to provide an open environment for the specification and implementation of SAO strategies. The framework is based on the Python programming language and contains a toolbox of Python
Fan-Yun Pai
2015-11-01
To consistently produce high quality products, a quality management system such as ISO 9001:2000 or TS 16949 must be practically implemented. One core instrument of TS 16949 is MSA (Measurement System Analysis), which ranks the capability of a measurement system and ensures that the quality characteristics of the product can be traced through the whole manufacturing process. It is important to reduce the risk of Type I errors (acceptable goods misjudged as defective parts) and Type II errors (defective parts misjudged as good parts). An ideal measuring system would have the statistical characteristic of zero error, but such a system can hardly exist. Hence, to maintain better control of the variance that might occur in the manufacturing process, MSA is necessary for better quality control. Ball screws, which are a key component in precision machines, have significant attributes with respect to positioning and transmission. Failures of lead accuracy and axial gap of a ball screw can cause negative and expensive effects on machine positioning accuracy. Consequently, a functional measurement system can yield great savings by detecting Type I and Type II errors. If the measurement system fails with respect to the specification of the product, it will likely produce Type I and Type II misjudgments. Inspectors normally follow the MSA regulations for accuracy measurement, but the choice of measuring system does not merely depend on some simple indices. In this paper, we examine the stability of a measuring system by using a Monte Carlo simulation to establish the bias, the linearity, the variance of the normal distribution, and the probability density function. Further, we forecast the possible area distribution in the real case. After the simulation, the measurement capability is improved, which helps the user classify the measurement system and establish measurement regulations for better performance and monitoring of the precision of the ball screw.
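The Type I / Type II trade-off under gauge bias and noise can be estimated with a short Monte Carlo sketch. All distributions, spec limits, and gauge parameters below are invented assumptions, not the paper's ball-screw data:

```python
import random

# Monte Carlo estimate of gauge misclassification: a part's true value plus
# gauge bias and noise is compared against spec limits. Figures are assumed.

def misclassification_rates(n=100_000, lsl=-3.0, usl=3.0,
                            part_sd=1.0, gauge_bias=0.1, gauge_sd=0.3, seed=1):
    rng = random.Random(seed)
    type1 = type2 = good = bad = 0
    for _ in range(n):
        true = rng.gauss(0.0, part_sd)
        measured = true + gauge_bias + rng.gauss(0.0, gauge_sd)
        truly_good = lsl <= true <= usl
        judged_good = lsl <= measured <= usl
        if truly_good:
            good += 1
            if not judged_good:
                type1 += 1            # good part rejected
        else:
            bad += 1
            if judged_good:
                type2 += 1            # bad part accepted
    return type1 / good, type2 / max(bad, 1)

type1_rate, type2_rate = misclassification_rates()
```

Sweeping `gauge_bias` and `gauge_sd` over plausible ranges maps out how measurement capability drives both error rates.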
Frömer, Romy; Maier, Martin; Abdel Rahman, Rasha
2018-01-01
Here we present an application of an EEG processing pipeline customizing EEGLAB and FieldTrip functions, specifically optimized to flexibly analyze EEG data based on single-trial information. The key component of our approach is to create a comprehensive 3-D EEG data structure including all trials and all participants, maintaining the original order of recording. This allows straightforward access to subsets of the data based on any information available in a behavioral data structure matched with the EEG data (experimental conditions, but also performance indicators, such as accuracy or RTs of single trials). In the present study we exploit this structure to compute linear mixed models (LMMs, using lmer in R) including random intercepts and slopes for items. This information can easily be read out from the matched behavioral data, whereas it might not be accessible in traditional ERP approaches without substantial effort. We further provide easily adaptable scripts for performing cluster-based permutation tests (as implemented in FieldTrip) as a more robust alternative to traditional omnibus ANOVAs. Our approach is particularly advantageous for data with parametric within-subject covariates (e.g., performance) and/or multiple complex stimuli (such as words, faces or objects) that vary in features affecting cognitive processes and ERPs (such as word frequency, salience or familiarity), which are sometimes hard to control experimentally or might themselves constitute variables of interest. The present dataset was recorded from 40 participants who performed a visual search task on previously unfamiliar objects, presented either visually intact or blurred. MATLAB as well as R scripts are provided that can be adapted to different datasets.
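The single-trial data structure idea is language-agnostic and can be sketched with arrays: keep the EEG in recording order alongside a matched behavioral table, so any per-trial variable indexes the EEG directly. Shapes and variables below are invented; the authors' pipeline uses MATLAB/EEGLAB structures, not this layout:

```python
import numpy as np

# Toy single-trial layout: one array in original recording order
# (participants x trials x channels x time) plus matched behavioral data.
# All shapes and values are invented assumptions for illustration.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((10, 40, 16, 50))        # subj x trial x chan x time
rt = rng.uniform(0.3, 1.2, size=(10, 40))          # matched reaction times
accuracy = rng.integers(0, 2, size=(10, 40))       # matched accuracy flags

# e.g. average over correct, fast trials only -- no re-epoching needed,
# because the behavioral mask indexes the EEG array directly
mask = (accuracy == 1) & (rt < 0.6)
erp = eeg[mask].mean(axis=0)                       # average over selected trials
```

The same boolean-mask selection is what makes per-trial covariates available to LMMs without rebuilding condition-averaged ERPs.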
Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn
2018-03-01
Large eddy simulation coupled with the linear eddy model (LEM) is employed to simulate n-heptane spray flames and investigate the low-temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine-relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. A non-reacting case is first carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreement is observed in terms of liquid and vapour penetration length, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, the flame index is introduced to distinguish between premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics and to distinguish the different combustion stages. Results show that the two-stage combustion process can be identified in spray flames, and different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial temperature of 850 K, the first-stage ignition is initiated in the fuel-lean region, followed by reactions in fuel-rich regions; high-temperature reaction then occurs mainly at places with mixture concentration around the stoichiometric mixture fraction. At an initial temperature of 1000 K, by contrast, the first-stage ignition occurs in the fuel-rich region first and then moves towards even richer regions, after which the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated in regions richer than the stoichiometric mixture fraction. By increasing the initial ambient temperature, the high-temperature ignition kernels move towards richer
Sequentially pulsed traveling wave accelerator
Caporaso, George J [Livermore, CA]; Nelson, Scott D [Patterson, CA]; Poole, Brian R [Tracy, CA]
2009-08-18
A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.
Shiba, Hajime; Yabu, Takeshi; Sudayama, Makoto; Mano, Nobuhiro; Arai, Naoto; Nakanishi, Teruyuki; Hosono, Kuniaki
2016-04-15
To elucidate the degradation process of the posterior silk gland during metamorphosis of the silkworm Bombyx mori, tissues collected on the 6th day after entering the 5th instar (V6), prior to spinning (PS), during spinning (SP) and after cocoon formation (CO) were used to analyze macroautophagy, chaperone-mediated autophagy (CMA) and the adenosine triphosphate (ATP)-dependent ubiquitin proteasome. Immediately after entering metamorphosis stage PS, the levels of ATP and phosphorylated p70S6 kinase protein decreased spontaneously and continued to decline at SP, followed by a notable restoration at CO. In contrast, phosphorylated AMP-activated protein kinase α (AMPKα) showed increases at SP and CO. Most of the Atg8 protein was converted to form II at all stages. The levels of ubiquitinated proteins were high at SP and CO, and low at PS. The proteasome activity was high at V6 and PS but low at SP and CO. In the isolated lysosome fractions, levels of Hsc70/Hsp70 protein began to increase at PS and continued to rise at SP and CO. The lysosomal cathepsin B/L activity showed a dramatic increase at CO. Our results clearly demonstrate that macroautophagy occurs before entering the metamorphosis stage and strongly suggest that the CMA pathway may play an important role in the histolysis of the posterior silk gland during metamorphosis. © 2016. Published by The Company of Biologists Ltd.
Masatoshi Hasegawa
2017-10-01
This paper reviews the development of new high-temperature polymeric materials applicable to plastic substrates in image display devices with a focus on our previous results. Novel solution-processable colorless polyimides (PIs) with ultra-low linear coefficients of thermal expansion (CTE) are proposed in this paper. First, the principles of the coloration of PI films are briefly discussed, including the influence of the processing conditions on the film coloration, as well as the chemical and physical factors dominating the low CTE characteristics of the resultant PI films, to clarify the challenges in simultaneously achieving excellent optical transparency, a very high Tg, a very low CTE, and excellent film toughness. A possible approach to achieving these target properties is to use semi-cycloaliphatic PI systems consisting of linear chain structures. However, semi-cycloaliphatic PIs obtained using cycloaliphatic diamines suffer various problems during precursor polymerization, cyclodehydration (imidization), and film preparation. In particular, when using trans-1,4-cyclohexanediamine (t-CHDA) as the cycloaliphatic diamine, a serious problem emerges: salt formation in the initial stages of the precursor polymerization, which terminates the polymerization in some cases or significantly extends the reaction period. The system derived from 3,3′,4,4′-biphenyltetracarboxylic dianhydride (s-BPDA) and t-CHDA can be polymerized by a controlled heating method and leads to a PI film with relatively good properties, i.e., excellent light transmittance at 400 nm (T400 = ~80%), a high Tg (>300 °C), and a very low CTE (10 ppm·K−1). However, this PI film is somewhat brittle (the maximum elongation at break, εb max, is about 10%). On the other hand, the combination of cycloaliphatic tetracarboxylic dianhydrides and aromatic diamines does not result in salt formation. The steric structures of cycloaliphatic tetracarboxylic dianhydrides significantly influence
Akizuki, S; Toda, T
2018-04-01
Although the combination of denitritation and methanogenesis for wastewater treatment has been widely investigated, the application of this technology to solid waste treatment has rarely been studied. This study investigated an anaerobic-aerobic batch system with simultaneous denitritation-methanogenesis as an effective treatment for marine biofouling, which is a major source of intermittently discharged organic solid wastes. Preliminarily NO2−-exposed sludge was inoculated to achieve a stable methanogenesis process without NO2− inhibition. Both a high NH4+-N removal of 99.5% and a high NO2−-N accumulation of 96.4% were achieved on average during the nitritation step. A sufficient CH4 recovery of 101 L-CH4 kg-COD−1 was achieved, indicating that the use of NO2−-exposed sludge is effective in avoiding NO2− inhibition of methanogenesis. Methanogenesis was the main COD utilization pathway when substrate solubilization occurred actively, while denitritation was the main pathway when solubilization was limited because of substrate shortage. The results showed a high COD removal efficiency of 96.0% and a relatively low nitrogen removal efficiency of 64.4%. Fitting equations were developed to optimize the effluent exchange ratio. The estimated results showed that increasing the effluent exchange ratio during the active solubilization period increased the nitrogen removal efficiency but decreased the CH4 content in the biogas. An appropriate effluent exchange ratio, with a high anaerobic effluent quality below approximately 120 mg-N L−1 as well as a sufficient CH4 gas quality usable as fuel for a gas engine generator, was achieved by daily effluent exchange of 80% during the first week and 5% during the subsequent 8 days. Copyright © 2017 Elsevier Ltd. All rights reserved.
Sequential experiments with primes
Caragiu, Mihai
2017-01-01
With a specific focus on the mathematical life in small undergraduate colleges, this book presents a variety of elementary number theory insights involving sequences largely built from prime numbers and contingent number-theoretic functions. Chapters include new mathematical ideas and open problems, some of which are proved in the text. Vector valued MGPF sequences, extensions of Conway’s Subprime Fibonacci sequences, and linear complexity of bit streams derived from GPF sequences are among the topics covered in this book. This book is perfect for the pure-mathematics-minded educator in a small undergraduate college as well as graduate students and advanced undergraduate students looking for a significant high-impact learning experience in mathematics.
Sequential-Simultaneous Analysis of Japanese Children's Performance on the Japanese McCarthy.
Ishikuma, Toshinori; And Others
This study explored the hypothesis that Japanese children perform significantly better on simultaneous processing than on sequential processing. The Kaufman Assessment Battery for Children (K-ABC) served as the criterion of the two types of mental processing. Regression equations to predict Sequential and Simultaneous processing from McCarthy…
Feng, Huihua; Guo, Chendong; Jia, Boru; Zuo, Zhengxing; Guo, Yuyao; Roskilly, Tony
2016-01-01
Highlights: • The intermediate process of the free-piston linear generator is investigated for the first time. • A gradual switching strategy is the best strategy for the intermediate process. • Switching at the top dead center position has the least influence on the free-piston linear generator. • After the intermediate process, the operation parameter values are smaller than before it. - Abstract: The free-piston linear generator (FPLG) has more merits than traditional reciprocating engines (TRE), and has been under extensive investigation. Researchers have mainly investigated the starting process and the stable generating process of the FPLG, while there has been no report on the intermediate process from engine cold start-up to stable operation. Therefore, this paper investigated the intermediate process of the FPLG in terms of switching strategy and switching position based on simulation and test results. Results showed that when the motor force of the linear electric machine (LEM) declined gradually from 100% to 0% in steps of 50%, and then changed to a resistance force opposite to the piston velocity (generator mode), the operation parameters of the FPLG showed minimal changes. Meanwhile, the engine operated more smoothly when the LEM switched its working mode from motor to generator at the piston dead center, compared with switching at mid-stroke or at a random time. More importantly, after the intermediate process, the operation parameters of the FPLG were smaller than those before it. As a result, a gradual motor/generator switching strategy is recommended, and the LEM should switch its working mode when the piston arrives at its dead center in order to achieve smooth engine operation.
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Remarks on sequential designs in risk assessment
Seidenfeld, T.
1982-01-01
The special merits of sequential designs are reviewed in light of particular challenges that attend risk assessment for human populations. The kinds of "statistical inference" are distinguished, and the design problem pursued here is the clash between the Neyman-Pearson and Bayesian programs of sequential design. The value of sequential designs is discussed, and Neyman-Pearson versus Bayesian sequential designs are probed in particular. Finally, cautions regarding sequential designs are considered, especially in relation to utilitarianism.
Tracing Sequential Video Production
Otrel-Cass, Kathrin; Khalid, Md. Saifuddin
2015-01-01
…for one week in 2014, and collected and analyzed visual data to learn about scientists' practices. The visual material that was collected represented the agreed-on material artifacts that should aid the students' reflective process of making sense of science technology practices. It was up to the student… video, nature of the interactional space, and material and spatial semiotics.
Ligier, Nicolas; Carter, John; Poulet, François; Langevin, Yves; Dumas, Christophe; Gourgeot, Florian
2016-04-01
Jupiter's moon Europa harbors a very young surface dated, based on cratering rates, to 10-50 Myr (Zahnle et al. 1998, Pappalardo et al. 1999). This young age implies rapid surface recycling and reprocessing, partially engendered by a global salty subsurface liquid ocean that could result in tectonic activity (Schmidt et al. 2011, Kattenhorn et al. 2014) and active plumes (Roth et al. 2014). The surface of Europa should contain important clues about the composition of this subsurface briny ocean and about the potential presence of material of exobiological interest in it, thus reinforcing Europa as a major target of interest for upcoming space missions such as the ESA L-class mission JUICE. To investigate the composition of the surface of Europa, a global mapping campaign of the satellite was performed between October 2011 and January 2012 with the integral field spectrograph SINFONI on the Very Large Telescope (VLT) in Chile. The fine spectral sampling of this instrument (0.5 nm) is suitable for detecting any narrow mineral signature in the wavelength range 1.45-2.45 μm. The spatially resolved spectra we obtained over five epochs nearly cover the entire surface of Europa with a pixel scale of 12.5 by 25 mas (~35 by 70 km on Europa's surface), thus permitting a global-scale study. Until recently, a large majority of studies proposed only sulfate salts, along with sulfuric acid hydrate and water ice, to be present on Europa's surface. However, recent works based on Europa's surface coloration in the visible wavelength range and NIR spectral analysis support the hypothesis of the predominance of chloride salts instead of sulfate salts (Hand & Carlson 2015, Fischer et al. 2015). Our linear spectral modeling supports this new hypothesis insofar as the use of Mg-bearing chlorides improved the fits whatever the region. As expected, the distribution of sulfuric acid hydrate is correlated with the Iogenic sulfur ion implantation flux distribution (Hendrix et al
de Oliveira, Luciana Renata; Bazzani, Armando; Giampieri, Enrico; Castellani, Gastone C
2014-08-14
We propose a non-equilibrium thermodynamical description in terms of the Chemical Master Equation (CME) to characterize the dynamics of a chemical cycle chain reaction among m different species. These systems can be closed or open for energy and molecule exchange with the environment, which determines how they relax to the stationary state. Closed systems reach an equilibrium state (characterized by the detailed balance condition (D.B.)), while open systems will reach a non-equilibrium steady state (NESS). The principal difference between D.B. and NESS is due to the presence of chemical fluxes. In the D.B. condition the fluxes are absent, while in the NESS case the chemical fluxes are necessary to maintain the state. All biological systems are characterized by their "far from equilibrium behavior," hence the NESS is a good candidate for a realistic description of the dynamical and thermodynamical properties of living organisms. In this work we consider a CME written in terms of a discrete Kolmogorov forward equation, which leads us to write explicitly the non-equilibrium chemical fluxes. For systems in NESS, we show that there is a non-conservative "external vector field" which is linearly proportional to the chemical fluxes. We also demonstrate that the modulation of these external fields does not change their stationary distributions, which allows us to study the same system and outline the differences in the system's behavior when it switches from the D.B. regime to NESS. We were interested in how the non-equilibrium fluxes influence the relaxation process during the approach to the stationary distribution. By performing analytical and numerical analysis, our central result is that the presence of the non-equilibrium chemical fluxes reduces the characteristic relaxation time with respect to the D.B. condition. Within a biochemical and biological perspective, this result can be related to the "plasticity property" of biological systems and to their
Oliveira, Luciana Renata de; Bazzani, Armando; Giampieri, Enrico; Castellani, Gastone C.
2014-01-01
We propose a non-equilibrium thermodynamical description in terms of the Chemical Master Equation (CME) to characterize the dynamics of a chemical cycle chain reaction among m different species. These systems can be closed or open for energy and molecule exchange with the environment, which determines how they relax to the stationary state. Closed systems reach an equilibrium state (characterized by the detailed balance condition (D.B.)), while open systems will reach a non-equilibrium steady state (NESS). The principal difference between D.B. and NESS is due to the presence of chemical fluxes. In the D.B. condition the fluxes are absent, while in the NESS case the chemical fluxes are necessary to maintain the state. All biological systems are characterized by their “far from equilibrium behavior,” hence the NESS is a good candidate for a realistic description of the dynamical and thermodynamical properties of living organisms. In this work we consider a CME written in terms of a discrete Kolmogorov forward equation, which leads us to write explicitly the non-equilibrium chemical fluxes. For systems in NESS, we show that there is a non-conservative “external vector field” which is linearly proportional to the chemical fluxes. We also demonstrate that the modulation of these external fields does not change their stationary distributions, which allows us to study the same system and outline the differences in the system's behavior when it switches from the D.B. regime to NESS. We were interested in how the non-equilibrium fluxes influence the relaxation process during the approach to the stationary distribution. By performing analytical and numerical analysis, our central result is that the presence of the non-equilibrium chemical fluxes reduces the characteristic relaxation time with respect to the D.B. condition. Within a biochemical and biological perspective, this result can be related to the “plasticity property” of biological
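The abstract's central claim (non-equilibrium fluxes shorten the relaxation time relative to detailed balance) can be illustrated numerically on a minimal example. The sketch below builds the generator of a 3-state cyclic master equation and compares the spectral gap, i.e. the slowest nonzero decay rate, under detailed balance and under a driven cycle. The specific rate values, and the choice to fix the geometric mean of the rates while driving, are my assumptions for illustration, not the paper's model.

```python
import numpy as np

def ring_generator(kp, km, n=3):
    """Generator L of an n-state ring (dp/dt = L p) with clockwise
    jump rate kp and counterclockwise jump rate km."""
    L = np.zeros((n, n))
    for i in range(n):
        L[(i + 1) % n, i] += kp   # i -> i+1
        L[(i - 1) % n, i] += km   # i -> i-1
        L[i, i] -= kp + km        # probability conservation (columns sum to 0)
    return L

def spectral_gap(L):
    """Slowest nonzero decay rate: minus the largest negative real part."""
    ev = np.linalg.eigvals(L)
    return -max(e.real for e in ev if e.real < -1e-9)

# Both cases keep the geometric mean sqrt(kp * km) = 1 fixed; only the
# driving (the ratio kp/km, hence the cycle flux) differs.
gap_db   = spectral_gap(ring_generator(1.0, 1.0))   # detailed balance
gap_ness = spectral_gap(ring_generator(4.0, 0.25))  # driven cycle (NESS)
print(gap_db, gap_ness)  # the driven cycle has the larger gap -> faster relaxation
```

For the uniform ring the gap is 1.5 (kp + km), so driving the cycle at fixed geometric mean raises kp + km and shortens the relaxation time, consistent with the abstract's result.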
Sequential lineups: shift in criterion or decision strategy?
Gronlund, Scott D
2004-04-01
R. C. L. Lindsay and G. L. Wells (1985) argued that a sequential lineup enhanced discriminability because it elicited use of an absolute decision strategy. E. B. Ebbesen and H. D. Flowe (2002) argued that a sequential lineup led witnesses to adopt a more conservative response criterion, thereby affecting bias, not discriminability. Height was encoded as absolute (e.g., 6 ft [1.83 m] tall) or relative (e.g., taller than). If a sequential lineup elicited an absolute decision strategy, the principle of transfer-appropriate processing predicted that performance should be best when height was encoded absolutely. Conversely, if a simultaneous lineup elicited a relative decision strategy, performance should be best when height was encoded relatively. The predicted interaction was observed, providing direct evidence for the decision strategies explanation of what happens when witnesses view a sequential lineup.
Sequential versus simultaneous market delineation
Haldrup, Niels; Møllgaard, Peter; Kastberg Nielsen, Claus
2005-01-01
Delineation of the relevant market forms a pivotal part of most antitrust cases. The standard approach is sequential. First the product market is delineated, then the geographical market is defined. Demand and supply substitution in both the product dimension and the geographical dimension… and geographical markets. Using a unique data set for prices of Norwegian and Scottish salmon, we propose a methodology for simultaneous market delineation and we demonstrate that compared to a sequential approach conclusions will be reversed. JEL: C3, K21, L41, Q22. Keywords: Relevant market, econometric delineation…
Sequential logic analysis and synthesis
Cavanagh, Joseph
2007-01-01
Until now, there was no single resource for actual digital system design. Using both basic and advanced concepts, Sequential Logic: Analysis and Synthesis offers a thorough exposition of the analysis and synthesis of both synchronous and asynchronous sequential machines. With 25 years of experience in designing computing equipment, the author stresses the practical design of state machines. He clearly delineates each step of the structured and rigorous design principles that can be applied to practical applications. The book begins by reviewing the analysis of combinatorial logic and Boolean algebra
Antonov, Y.; Zhuravleva, I.; Cardinaels, R.M.; Moldenaers, P.
2017-01-01
We study thermal aggregation and disaggregation processes in complex carrageenan/lysozyme systems with a different linear charge density of the sulphated polysaccharide. To this end, we determine the temperature dependency of the turbidity and the intensity size distribution functions in complex
Deboeck, Pascal R.; Boker, Steven M.; Bergeman, C. S.
2008-01-01
Among the many methods available for modeling intraindividual time series, differential equation modeling has several advantages that make it promising for applications to psychological data. One interesting differential equation model is that of the damped linear oscillator (DLO), which can be used to model variables that have a tendency to…
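The damped linear oscillator (DLO) model mentioned above can be sketched in a few lines: the second-order equation d²x/dt² = ηx + ζ(dx/dt), with η < 0 setting the frequency and ζ < 0 the damping, is integrated with a simple semi-implicit Euler scheme so the oscillation's decay is visible. The parameter values and step size below are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the damped linear oscillator (DLO):
#   d2x/dt2 = eta * x + zeta * dx/dt
# eta < 0 gives a restoring force, zeta < 0 gives damping.
eta, zeta = -1.0, -0.2
dt, steps = 0.01, 5000

x, v = 1.0, 0.0          # initial displacement and velocity
trajectory = []
for _ in range(steps):
    a = eta * x + zeta * v   # acceleration from the DLO equation
    v += a * dt              # semi-implicit Euler: update velocity first,
    x += v * dt              # then position with the new velocity
    trajectory.append(x)

# With zeta < 0 the oscillation amplitude shrinks over time:
early = max(abs(t) for t in trajectory[:1000])
late = max(abs(t) for t in trajectory[-1000:])
print(early, late)  # late amplitude is much smaller than early amplitude
```

In applications to intraindividual time series, η and ζ would be estimated from data (e.g. via local linear approximation of the derivatives) rather than fixed as here.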
Estimation After a Group Sequential Trial.
Milanzi, Elasma; Molenberghs, Geert; Alonso, Ariel; Kenward, Michael G; Tsiatis, Anastasios A; Davidian, Marie; Verbeke, Geert
2015-10-01
Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al (2012) and Milanzi et al (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even, unbiased linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size as well as marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite sample unbiased, but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why
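The finite-sample behavior of the sample average after a group sequential trial can be probed with a small Monte Carlo sketch. The two-look design below (stop at N = n if the interim mean is positive, otherwise continue to N = 2n, with N(0,1) outcomes) is an illustrative stopping rule of my own choosing, not the paper's; it exhibits the small finite-sample bias of the sample average that coexists with its asymptotic unbiasedness.

```python
import numpy as np

# Two-look group sequential design: N = n or N = 2n, depending on the
# interim look. Outcomes are N(0,1), so the true mean is 0; any nonzero
# average of the per-trial sample means is finite-sample bias.
rng = np.random.default_rng(0)
n, reps = 50, 100_000
means = np.empty(reps)
for r in range(reps):
    first = rng.standard_normal(n)
    if first.mean() > 0:                      # stop early at the interim look
        means[r] = first.mean()
    else:                                     # continue to the full N = 2n
        second = rng.standard_normal(n)
        means[r] = (first.sum() + second.sum()) / (2 * n)

bias = means.mean()
print(bias)  # small positive bias (~0.028): stopping early favors high means
```

Analytically, the bias here is 0.25·E|m| with m ~ N(0, 1/n), about 0.028 for n = 50; it vanishes as n grows, matching the asymptotic unbiasedness discussed above.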
Automatic synthesis of sequential control schemes
Klein, I.
1993-01-01
Of all hardware and software developed for industrial control purposes, the majority is devoted to sequential, or binary valued, control and only a minor part to classical linear control. Typically, the sequential parts of the controller are invoked during startup and shut-down to bring the system into its normal operating region and into some safe standby region, respectively. Despite its importance, fairly little theoretical research has been devoted to this area, and sequential control programs are therefore still created manually without much theoretical support for a systematic approach. We propose a method to create sequential control programs automatically. The main idea is to spend some effort off-line modelling the plant, and from this model generate the control strategy, that is, the plan. The plant is modelled using action structures, thereby concentrating on the actions instead of the states of the plant. In general the planning problem shows exponential complexity in the number of state variables. However, by focusing on the actions, we can identify problem classes as well as algorithms such that the planning complexity is reduced to polynomial complexity. We prove that these algorithms are sound, i.e., the generated solution will solve the stated problem, and complete, i.e., if the algorithms fail, then no solution exists. The algorithms generate a plan as a set of actions and a partial order on this set specifying the execution order. The generated plan is proven to be minimal and maximally parallel. For a larger class of problems we propose a method to split the original problem into a number of simple problems that can each be solved using one of the presented algorithms. It is also shown how a plan can be translated into a GRAFCET chart, and to illustrate these ideas we have implemented a planning tool, i.e., a system that is able to automatically create control schemes. Such a tool can of course also be used on-line if it is fast enough. This
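The representation of a plan described above, a set of actions plus a partial order specifying the execution order, can be illustrated with Python's standard-library topological sorter: actions whose predecessors are all done may fire together, which is exactly the "maximally parallel" reading of the partial order. The plant actions and ordering constraints below are invented for illustration; the paper derives such plans automatically from an action-structure model.

```python
from graphlib import TopologicalSorter

# A toy plan: each action maps to the set of actions that must precede it.
# (Hypothetical process-plant actions, not from the paper.)
order = {
    "open_inlet_valve":  set(),
    "start_pump":        {"open_inlet_valve"},
    "start_heater":      {"open_inlet_valve"},
    "open_outlet_valve": {"start_pump", "start_heater"},
}

ts = TopologicalSorter(order)
ts.prepare()
levels = []
while ts.is_active():
    ready = sorted(ts.get_ready())   # all actions with no pending predecessors
    levels.append(ready)             # one "level" = actions executable in parallel
    ts.done(*ready)
print(levels)
```

Here `start_pump` and `start_heater` land in the same level, so the linear execution order is only partially constrained, mirroring the maximally parallel plans the paper generates; each level would become a step in a GRAFCET chart.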
Morrow, A; Rangaraj, D; Perez-Andujar, A; Krishnamurthy, N
2016-01-01
Purpose: This work’s objective is to determine the overlap of processes, in terms of sub-processes and time, between acceptance testing and commissioning of a conventional medical linear accelerator and to evaluate the time saved by consolidating the two processes. Method: A process map for acceptance testing for medical linear accelerators was created from vendor documentation (Varian and Elekta). Using AAPM TG-106 and in-house commissioning procedures, a process map was created for commissioning of said accelerators. The time to complete each sub-process in each process map was evaluated. Redundancies in the processes were found and the time spent on each were calculated. Results: Mechanical testing significantly overlaps between the two processes - redundant work here amounts to 9.5 hours. Many beam non-scanning dosimetry tests overlap resulting in another 6 hours of overlap. Beam scanning overlaps somewhat - acceptance tests include evaluating PDDs and multiple profiles but for only one field size while commissioning beam scanning includes multiple field sizes and depths of profiles. This overlap results in another 6 hours of rework. Absolute dosimetry, field outputs, and end to end tests are not done at all in acceptance testing. Finally, all imaging tests done in acceptance are repeated in commissioning, resulting in about 8 hours of rework. The total time overlap between the two processes is about 30 hours. Conclusion: The process mapping done in this study shows that there are no tests done in acceptance testing that are not also recommended to do for commissioning. This results in about 30 hours of redundant work when preparing a conventional linear accelerator for clinical use. Considering these findings in the context of the 5000 linacs in the United States, consolidating acceptance testing and commissioning would have allowed for the treatment of an additional 25000 patients using no additional resources.
Morrow, A [Scott & White Hospital Temple, TX (United States); Rangaraj, D [Baylor Scott & White Health, Temple, TX (United States); Perez-Andujar, A [University of California San Francisco, San Francisco, CA (United States); Krishnamurthy, N [Baylor Scott & White Healthcare, Temple, TX (United States)
2016-06-15
Purpose: This work’s objective is to determine the overlap of processes, in terms of sub-processes and time, between acceptance testing and commissioning of a conventional medical linear accelerator and to evaluate the time saved by consolidating the two processes. Method: A process map for acceptance testing for medical linear accelerators was created from vendor documentation (Varian and Elekta). Using AAPM TG-106 and in-house commissioning procedures, a process map was created for commissioning of said accelerators. The time to complete each sub-process in each process map was evaluated. Redundancies in the processes were found and the time spent on each were calculated. Results: Mechanical testing significantly overlaps between the two processes - redundant work here amounts to 9.5 hours. Many beam non-scanning dosimetry tests overlap resulting in another 6 hours of overlap. Beam scanning overlaps somewhat - acceptance tests include evaluating PDDs and multiple profiles but for only one field size while commissioning beam scanning includes multiple field sizes and depths of profiles. This overlap results in another 6 hours of rework. Absolute dosimetry, field outputs, and end to end tests are not done at all in acceptance testing. Finally, all imaging tests done in acceptance are repeated in commissioning, resulting in about 8 hours of rework. The total time overlap between the two processes is about 30 hours. Conclusion: The process mapping done in this study shows that there are no tests done in acceptance testing that are not also recommended to do for commissioning. This results in about 30 hours of redundant work when preparing a conventional linear accelerator for clinical use. Considering these findings in the context of the 5000 linacs in the United States, consolidating acceptance testing and commissioning would have allowed for the treatment of an additional 25000 patients using no additional resources.
Makela, M.
2012-10-15
Traditional process industries in Finland and abroad are facing an emerging waste disposal problem due to recent regulatory developments which have increased the costs of landfill disposal and the difficulty of acquiring new sites. For large manufacturers, such as the forest and ferrous metals industries, symbiotic cooperation of formerly separate industrial sectors could enable the utilisation of waste-labelled residues in manufacturing novel residue-derived materials suitable for replacing commercial virgin alternatives. Such efforts would allow transforming the current linear resource use and disposal models into more cyclical ones and thus attain savings in valuable materials and energy resources. The work described in this thesis was aimed at utilising forest and carbon steel industry residues in the experimental manufacture of novel residue-derived materials technically and environmentally suitable for amending agricultural or forest soil properties. Single and sequential chemical extractions were used to compare the pseudo-total concentrations of trace elements in the manufactured amendment samples to relevant Finnish statutory limit values for the use of fertilizer products and to assess respective potential availability under natural conditions. In addition, the quality of analytical work and the suitability of sequential extraction in the analysis of an industrial solid sample were respectively evaluated through the analysis of a certified reference material and by X-ray diffraction of parallel sequential extraction residues. According to the acquired data, the incorporation of both forest and steel industry residues, such as fly ashes, lime wastes, green liquor dregs, sludges and slags, led to amendment liming capacities (34.9-38.3%, Ca equiv., d.w.) comparable to relevant commercial alternatives. Only the first experimental samples showed increased concentrations of pseudo-total cadmium and chromium, of which the latter was specified as the trivalent Cr(III). Based on
Evaluation Using Sequential Trials Methods.
Cohen, Mark E.; Ralls, Stephen A.
1986-01-01
Although dental school faculty as well as practitioners are interested in evaluating products and procedures used in clinical practice, research design and statistical analysis can sometimes pose problems. Sequential trials methods provide an analytical structure that is both easy to use and statistically valid. (Author/MLW)
Attack Trees with Sequential Conjunction
Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando
2015-01-01
We provide the first formal foundation of SAND attack trees which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of
Elise Cormie-Bowins
2012-10-01
We consider the problem of computing reachability probabilities: given a Markov chain, an initial state of the Markov chain, and a set of goal states of the Markov chain, what is the probability of reaching any of the goal states from the initial state? This problem can be reduced to solving a linear equation Ax = b for x, where A is a matrix and b is a vector. We consider two iterative methods to solve the linear equation: the Jacobi method and the biconjugate gradient stabilized (BiCGStab) method. For both methods, a sequential and a parallel version have been implemented. The parallel versions have been implemented on the compute unified device architecture (CUDA) so that they can be run on a NVIDIA graphics processing unit (GPU). From our experiments we conclude that as the size of the matrix increases, the CUDA implementations outperform the sequential implementations. Furthermore, the BiCGStab method performs better than the Jacobi method for dense matrices, whereas the Jacobi method does better for sparse ones. Since the reachability probabilities problem plays a key role in probabilistic model checking, we also compared the implementations for matrices obtained from a probabilistic model checker. Our experiments support the conjecture by Bosnacki et al. that the Jacobi method is superior to Krylov subspace methods, a class to which the BiCGStab method belongs, for probabilistic model checking.
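The reduction described above, from reachability probabilities to a linear system solved iteratively, can be sketched with a small Jacobi-style iteration: goal states are clamped to 1, and every other state repeatedly averages its successors' values under the transition matrix until the values stop changing. The example chain and function name are mine; real probabilistic model checkers additionally preprocess away states that cannot reach the goal.

```python
import numpy as np

def jacobi_reachability(P, goal, tol=1e-12, max_iter=10_000):
    """Jacobi-style iteration for reachability probabilities.
    P is a row-stochastic transition matrix; goal states are clamped to 1
    and all other entries are updated simultaneously from P. (A sketch:
    no preprocessing of states that cannot reach the goal.)"""
    n = P.shape[0]
    x = np.zeros(n)
    x[list(goal)] = 1.0
    for _ in range(max_iter):
        x_new = P @ x              # simultaneous (Jacobi) update of all states
        x_new[list(goal)] = 1.0    # re-clamp the goal states
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# 4-state chain: state 3 is the goal, state 2 is an absorbing "fail" state.
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [0.5, 0.0, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
x = jacobi_reachability(P, goal={3})
print(x[0])  # probability of reaching state 3 from state 0 -> 1/3
```

This is the fixed point of x = b + Ax with A the transition probabilities among non-goal states and b the one-step probabilities into the goal, the same system the abstract solves with Jacobi and BiCGStab.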
Linearization and Segmentation in Discourse: Introduction to the Special Issue
Liesbeth Degand
2009-06-01
Like other forms of communication, language is inseparably tied to some kind of linear-sequential presentation, due to the linear-sequential nature of the media it operates on. Linearization in its turn presupposes segmentation, i.e. decisions concerning the size and type of units to be brought into a sequential order at various levels. In written and spoken language, for example, it has to be decided whether a piece of information can and should be realized as a word, a phrase, a clause, a (...
Natanael Antonio dos Santos
2002-01-01
The goal of this work is to discuss some basic conceptual aspects of Fourier analysis as the tool underlying the multiple-channel (spatial-frequency filter) approach to the study of the visual processing of form. Some of the psychophysical paradigms most frequently used to characterize the response of the human visual system to narrow-band spatial frequency filters are also discussed. Linear systems analysis and these psychophysical paradigms have contributed to the theoretical development of perception and of the visual processing of form.
Quantum chromodynamics as the sequential fragmenting with inactivation
Botet, R.
1996-01-01
We investigate the relation between the modified leading log approximation of the perturbative QCD and the sequential binary fragmentation process. We will show that in the absence of inactivation, this process is equivalent to the QCD gluodynamics. The inactivation term yields a precise prescription of how to include the hadronization in the QCD equations. (authors)
Event-shape analysis: Sequential versus simultaneous multifragment emission
Cebra, D.A.; Howden, S.; Karn, J.; Nadasen, A.; Ogilvie, C.A.; Vander Molen, A.; Westfall, G.D.; Wilson, W.K.; Winfield, J.S.; Norbeck, E.
1990-01-01
The Michigan State University 4π array has been used to select central-impact-parameter events from the reaction ⁴⁰Ar + ⁵¹V at incident energies from 35 to 85 MeV/nucleon. The event shape in momentum space is an observable which is shown to be sensitive to the dynamics of the fragmentation process. A comparison of the experimental event-shape distribution to sequential- and simultaneous-decay predictions suggests that a transition in the breakup process may have occurred. At 35 MeV/nucleon, a sequential-decay simulation reproduces the data. For the higher energies, the experimental distributions fall between the two contrasting predictions.
Human visual system automatically encodes sequential regularities of discrete events.
Kimura, Motohiro; Schröger, Erich; Czigler, István; Ohira, Hideki
2010-06-01
For our adaptive behavior in a dynamically changing environment, an essential task of the brain is to automatically encode sequential regularities inherent in the environment into a memory representation. Recent studies in neuroscience have suggested that sequential regularities embedded in discrete sensory events are automatically encoded into a memory representation at the level of the sensory system. This notion is largely supported by evidence from investigations using auditory mismatch negativity (auditory MMN), an event-related brain potential (ERP) correlate of an automatic memory-mismatch process in the auditory sensory system. However, it is still largely unclear whether or not this notion can be generalized to other sensory modalities. The purpose of the present study was to investigate the contribution of the visual sensory system to the automatic encoding of sequential regularities using visual mismatch negativity (visual MMN), an ERP correlate of an automatic memory-mismatch process in the visual sensory system. To this end, we conducted a sequential analysis of visual MMN in an oddball sequence consisting of infrequent deviant and frequent standard stimuli, and tested whether the underlying memory representation of visual MMN generation contains only a sensory memory trace of standard stimuli (trace-mismatch hypothesis) or whether it also contains sequential regularities extracted from the repetitive standard sequence (regularity-violation hypothesis). The results showed that visual MMN was elicited by first deviant (deviant stimuli following at least one standard stimulus), second deviant (deviant stimuli immediately following first deviant), and first standard (standard stimuli immediately following first deviant), but not by second standard (standard stimuli immediately following first standard). These results are consistent with the regularity-violation hypothesis, suggesting that the visual sensory system automatically encodes sequential regularities.
Sequential infiltration synthesis for advanced lithography
Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing
2017-10-10
A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.
Robustness of the Sequential Lineup Advantage
Gronlund, Scott D.; Carlson, Curt A.; Dailey, Sarah B.; Goodsell, Charles A.
2009-01-01
A growing movement in the United States and around the world involves promoting the advantages of conducting an eyewitness lineup in a sequential manner. We conducted a large study (N = 2,529) that included 24 comparisons of sequential versus simultaneous lineups. A liberal statistical criterion revealed only 2 significant sequential lineup…
Sequential Probability Ratio Tests: Conservative and Robust
Kleijnen, J.P.C.; Shi, Wen
2017-01-01
In practice, most computers generate simulation outputs sequentially, so it is attractive to analyze these outputs through sequential statistical methods such as sequential probability ratio tests (SPRTs). We investigate several SPRTs for choosing between two hypothesized values for the mean output
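A minimal Wald SPRT for choosing between two hypothesized means of a normal output with known variance can be sketched as follows. This is the textbook test, not the conservative or robust variants the paper investigates:

```python
import math

def sprt(samples, mu0, mu1, sigma=1.0, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for
    H0: mean = mu0 versus H1: mean = mu1, normal data, known sigma."""
    upper = math.log((1 - beta) / alpha)   # crossing this accepts H1
    lower = math.log(beta / (1 - alpha))   # crossing this accepts H0
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # log-likelihood-ratio increment for one normal observation
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)
```

The appeal for simulation outputs is visible in the return value: the test stops at the first datum that makes a decision statistically justified, rather than after a fixed sample size.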
Comparison of Sequential and Variational Data Assimilation
Alvarado Montero, Rodolfo; Schwanenberg, Dirk; Weerts, Albrecht
2017-04-01
Data assimilation is a valuable tool to improve model state estimates by combining measured observations with model simulations. It has recently gained significant attention due to its potential for using remote sensing products to improve operational hydrological forecasts and for reanalysis purposes. This has been supported by the application of sequential techniques such as the Ensemble Kalman Filter, which require no additional features within the modeling process, i.e. they can use arbitrary black-box models. Alternatively, variational techniques rely on optimization algorithms to minimize a pre-defined objective function. This function describes the trade-off between the amount of noise introduced into the system and the mismatch between simulated and observed variables. While sequential techniques have been commonly applied to hydrological processes, variational techniques are seldom used. We believe this is mainly attributable to the required computation of first-order sensitivities by algorithmic differentiation techniques and related model enhancements, but also to a lack of comparison between the two techniques. We contribute to filling this gap and present the results from the assimilation of streamflow data in two basins located in Germany and Canada. The assimilation introduces noise into precipitation and temperature to produce better initial estimates for an HBV model. The results are computed for a hindcast period and assessed using lead-time performance metrics. The study concludes with a discussion of the main features of each technique and their advantages and disadvantages in hydrological applications.
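The sequential (Ensemble Kalman Filter style) update can be sketched for the simplest possible case: a scalar state observed directly, with perturbed observations. This generic formulation illustrates the black-box character of sequential assimilation; it is not the authors' HBV setup:

```python
import random

def enkf_update(ensemble, y, obs_var, rng=random.Random(0)):
    """One perturbed-observation EnKF analysis step for a scalar state
    that is observed directly (observation operator H = 1)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # sample variance
    gain = var / (var + obs_var)                            # Kalman gain
    # each member assimilates its own perturbed copy of the observation
    return [x + gain * (y + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

# a prior ensemble far from the observation; a precise observation
# (small obs_var) pulls the whole ensemble toward it
rng = random.Random(1)
prior = [rng.gauss(0.0, 2.0) for _ in range(200)]
posterior = enkf_update(prior, 10.0, 0.01)
```

Note that the update needs only the ensemble of model states, not model derivatives, which is the contrast with the variational approach drawn in the abstract.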
Time scale of random sequential adsorption.
Erban, Radek; Chapman, S Jonathan
2007-04-01
A simple multiscale approach to the diffusion-driven adsorption from a solution to a solid surface is presented. The model combines two important features of the adsorption process: (i) the kinetics of the chemical reaction between adsorbing molecules and the surface and (ii) geometrical constraints on the surface made by molecules which are already adsorbed. Process (i) is modeled in a diffusion-driven context, i.e., the conditional probability of adsorbing a molecule provided that the molecule hits the surface is related to the macroscopic surface reaction rate. The geometrical constraint (ii) is modeled using random sequential adsorption (RSA), which is the sequential addition of molecules at random positions on a surface; one attempt to attach a molecule is made per RSA simulation time step. By coupling RSA with the diffusion of molecules in the solution above the surface, the RSA simulation time step is related to the real physical time. The method is illustrated on a model of chemisorption of reactive polymers to a virus surface.
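The geometric constraint (ii) can be illustrated with the classic one-dimensional version of RSA (random "car parking" on a line). This is a generic sketch of bare RSA, without the diffusion coupling the paper introduces:

```python
import random

def rsa_1d(line_length, segment_length, attempts, seed=0):
    """Random sequential adsorption of fixed-length segments on a line:
    one random placement attempt per time step; attempts that overlap an
    already-adsorbed segment are rejected (no diffusion, no desorption)."""
    rng = random.Random(seed)
    placed = []  # left endpoints of accepted segments
    for _ in range(attempts):
        x = rng.uniform(0.0, line_length - segment_length)
        if all(abs(x - p) >= segment_length for p in placed):
            placed.append(x)
    coverage = len(placed) * segment_length / line_length
    return placed, coverage

placed, coverage = rsa_1d(100.0, 1.0, 10_000)
```

After many attempts the coverage approaches the 1D jamming limit (Rényi's parking constant, about 0.7476); the surface never fills completely because rejected attempts leave gaps too small for another segment.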
Random sequential adsorption of cubes
Cieśla, Michał; Kubala, Piotr
2018-01-01
Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
María Isabel López Rodríguez
2014-06-01
When making use of the different existing tools for Statistical Process Control (SPC), better results can be achieved if a sequential approach is used. This working methodology makes it easier to detect the weak points of the production process. The present study describes one such methodology, in which the most appropriate statistical tools are used at each step of the production chain, depending on the needs arising at each point. Specifically, the proposed tools are the flowchart, the Pareto chart, check sheets, control charts (in this case the p-chart and the mean-range chart), and the analysis of variance (ANOVA). The procedure is applied to a company in the agri-food sector interested in solving the problem of a high number of customer complaints. The first conclusions, which show a high percentage of defective items (above 30%), lead to an analysis of the main causes, an assessment of the process capability, and a quantification of the resulting loss by means of the Taguchi loss function. To this end, a preliminary study is carried out of the off-centering of the process and of the excess variability that would explain its inability to manufacture to specification.
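The p-chart mentioned above places its control limits three standard errors around the average fraction defective. A minimal sketch, with hypothetical sample data rather than the company data from the study:

```python
import math

def p_chart_limits(defective_counts, sample_size):
    """3-sigma control limits for a p-chart (fraction defective),
    assuming equal sample sizes."""
    p_bar = sum(defective_counts) / (len(defective_counts) * sample_size)
    sigma = math.sqrt(p_bar * (1 - p_bar) / sample_size)
    lcl = max(0.0, p_bar - 3 * sigma)  # a proportion cannot go below 0
    ucl = min(1.0, p_bar + 3 * sigma)
    return lcl, p_bar, ucl

# hypothetical data: defectives found in five samples of 50 units each
lcl, p_bar, ucl = p_chart_limits([3, 4, 5, 4, 4], 50)
```

Samples whose fraction defective falls outside (lcl, ucl) signal a special cause and, in the sequential methodology above, trigger the root-cause tools (Pareto chart, ANOVA) for that point of the chain.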
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n [bit] stages (registers or internal states), the necessary computational cost is smaller than O(2^n). On the other hand, the Berlekamp-Massey algorithm needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resultant value of linear complexity; therefore, the linear complexity is generally given as an estimate. Because the linearization method calculates from the algorithm of the PRNG, it can determine the lower bound of linear complexity.
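For contrast with the linearization method, the Berlekamp-Massey algorithm computes the linear complexity L directly from an output sequence over GF(2). A standard textbook sketch:

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence s over GF(2)
    (Berlekamp-Massey); cost is O(N^2) in the sequence length N."""
    c, b = [1], [1]   # current and previous connection polynomials
    L, m = 0, 1       # complexity, and steps since b was last updated
    for i, bit in enumerate(s):
        # discrepancy between s[i] and the current LFSR's prediction
        d = bit
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d == 0:
            m += 1
            continue
        t = list(c)
        if len(b) + m > len(c):
            c = c + [0] * (len(b) + m - len(c))
        for j in range(len(b)):
            c[j + m] ^= b[j]   # c(x) += x^m * b(x) over GF(2)
        if 2 * L <= i:         # the complexity must grow
            L, b, m = i + 1 - L, t, 1
        else:
            m += 1
    return L

# the alternating sequence 1,0,1,0,... satisfies s_n = s_{n-2}, so L = 2
L_alt = berlekamp_massey([1, 0, 1, 0, 1, 0])
```

The dependence on the observed sequence is exactly the limitation the abstract points out: a different initial PRNG state can yield a different estimate, whereas the linearization method works from the algorithm itself.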
Maruthai Suresh
2010-10-01
A nonlinear process, a heat exchanger whose parameters vary with the process variable, is considered. The time constant and gain of the chosen process vary as a function of temperature. The limitations of a conventional feedback controller tuned using Ziegler-Nichols settings for the chosen process are brought out. The servo and regulatory responses, obtained through simulation and experimentation for various magnitudes of set-point changes and load changes at various operating points with the controller tuned only at a chosen nominal operating point, are analyzed. Regulatory responses for output load changes are studied. The efficiency of a feedforward controller and the effects of modeling error are brought out. An IMC-based system is presented to show clearly how variations of system parameters affect the performance of the controller. The present work illustrates the effectiveness of the feedforward and IMC controllers.
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Yamanaka, Tsuyuko; Raffaelli, David; White, Piran C L
2013-01-01
Sea-level rise induced by climate change may have significant impacts on the ecosystem functions and ecosystem services provided by intertidal sediment ecosystems. Accelerated sea-level rise is expected to lead to steeper beach slopes, coarser particle sizes and increased wave exposure, with consequent impacts on intertidal ecosystems. We examined the relationships between abundance, biomass, and community metabolism of benthic fauna with beach slope, particle size and exposure, using samples across a range of conditions from three different locations in the UK, to determine the significance of sediment particle size, beach slope and wave exposure in affecting benthic fauna and ecosystem function in different ecological contexts. Our results show that abundance, biomass and oxygen consumption of intertidal macrofauna and meiofauna are affected significantly by interactions among sediment particle size, beach slope and wave exposure. For macrofauna on less sloping beaches, the effect of these physical constraints is mediated by the local context, although for meiofauna and for macrofauna on intermediate and steeper beaches, the effects of physical constraints dominate. Steeper beach slopes, coarser particle sizes and increased wave exposure generally result in decreases in abundance, biomass and oxygen consumption, but these relationships are complex and non-linear. Sea-level rise is likely to lead to changes in ecosystem structure with generally negative impacts on ecosystem functions and ecosystem services. However, the impacts of sea-level rise will also be affected by local ecological context, especially for less sloping beaches.
Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain
2013-12-30
There is convincing empirical evidence for bidirectional transfer between music and language, such that experience in either domain can improve mental processes required by the other. This music-language relationship has been studied using linear models (e.g., comparing mean neural activity) that conceptualize brain activity as a static entity. The linear approach limits how we can understand the brain's processing of music and language because the brain is a nonlinear system. Furthermore, there is evidence that the networks supporting music and language processing interact in a nonlinear manner. We therefore posit that the neural processing and transfer between the domains of language and music are best viewed through the lens of a nonlinear framework. Nonlinear analysis of neurophysiological activity may yield new insight into the commonalities, differences, and bidirectionality between these two cognitive domains not measurable in the local output of a cortical patch. We thus propose a novel application of brain signal variability (BSV) analysis, based on mutual information and signal entropy, to better understand the bidirectionality of music-to-language transfer in the context of a nonlinear framework. This approach will extend current methods by offering a nuanced, network-level understanding of the brain complexity involved in music-language transfer.
Rotsch, David A. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Brossard, Tom [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Roussin, Ethan [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Quigley, Kevin [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Chemerisov, Sergey [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Gromov, Roman [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Jonah, Charles [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Hafenrichter, Lohman [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Tkac, Peter [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Krebs, John [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Vandegrift, George F. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division
2016-10-31
Molybdenum-99, the parent isotope of Tc-99m, can be produced from fission of U-235 in nuclear reactors and purified from fission products by the Cintichem process, later modified for low-enriched uranium (LEU) targets. The key step in this process is the precipitation of Mo with α-benzoin oxime (ABO). The stability of this complex to radiation has been examined. Molybdenum-ABO was irradiated with 3 MeV electrons produced by a Van de Graaff generator and 35 MeV electrons produced by a 50 MeV/25 kW electron linear accelerator. Dose equivalents of 1.7–31.2 kCi of Mo-99 were administered to freshly prepared Mo-ABO. Irradiated samples of Mo-ABO were processed according to the LEU Modified-Cintichem process. The Van de Graaff data indicated good radiation stability of the Mo-ABO complex up to ~15 kCi dose equivalents of Mo-99 and nearly complete destruction at doses >24 kCi Mo-99. The linear accelerator data indicate that even at a dose equivalent of 6.2 kCi of Mo-99, the sample lost ~20% of its Mo-99. The 20% loss of Mo-99 at this low dose may be attributed to thermal decomposition of the product from the heat deposited in the sample during irradiation.
Rogner, H.H.
1989-01-01
The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and were reformulated for presentation at the Workshop. They give a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm for the purposes of economic planning processes. 1 fig.
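A minimal dense-tableau SIMPLEX iteration for a small planning problem (maximize c·x subject to Ax ≤ b, x ≥ 0, with all b ≥ 0) can be sketched as follows. This is an illustrative toy, not the algorithm as presented in the Workshop text:

```python
def simplex(c, A, b):
    """Maximize c.x subject to A x <= b, x >= 0 (all b_i >= 0),
    using the standard tableau method with slack variables."""
    m, n = len(A), len(c)
    # tableau [A | I | b] with the negated objective row appended
    T = [list(map(float, A[i]))
         + [1.0 if k == i else 0.0 for k in range(m)]
         + [float(b[i])] for i in range(m)]
    T.append([-float(ci) for ci in c] + [0.0] * m + [0.0])
    basis = list(range(n, n + m))          # slack variables start basic
    while True:
        col = min(range(n + m), key=lambda j: T[-1][j])  # entering column
        if T[-1][col] >= -1e-9:
            break                                        # optimal
        rows = [i for i in range(m) if T[i][col] > 1e-9]
        if not rows:
            raise ValueError("problem is unbounded")
        row = min(rows, key=lambda i: T[i][-1] / T[i][col])  # ratio test
        piv = T[row][col]
        T[row] = [v / piv for v in T[row]]
        for r in range(m + 1):
            if r != row and T[r][col] != 0.0:
                f = T[r][col]
                T[r] = [v - f * w for v, w in zip(T[r], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# toy planning problem: maximize 3x + 2y with x + y <= 4 and x + 3y <= 6
x, value = simplex([3, 2], [[1, 1], [1, 3]], [4, 6])
```

Each pivot moves to an adjacent vertex of the feasible polytope with a non-decreasing objective, which is the geometric picture usually drawn in planning-oriented introductions.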
Native Frames: Disentangling Sequential from Concerted Three-Body Fragmentation
Rajput, Jyoti; Severt, T.; Berry, Ben; Jochim, Bethany; Feizollah, Peyman; Kaderiya, Balram; Zohrabi, M.; Ablikim, U.; Ziaee, Farzaneh; Raju P., Kanaka; Rolles, D.; Rudenko, A.; Carnes, K. D.; Esry, B. D.; Ben-Itzhak, I.
2018-03-01
A key question concerning the three-body fragmentation of polyatomic molecules is the distinction of sequential and concerted mechanisms, i.e., the stepwise or simultaneous cleavage of bonds. Using laser-driven fragmentation of OCS into O⁺ + C⁺ + S⁺ and employing coincidence momentum imaging, we demonstrate a novel method that enables the clear separation of sequential and concerted breakup. The separation is accomplished by analyzing the three-body fragmentation in the native frame associated with each step and taking advantage of the rotation of the intermediate molecular fragment, CO²⁺ or CS²⁺, before its unimolecular dissociation. This native-frame method works for any projectile (electrons, ions, or photons), provides details on each step of the sequential breakup, and enables the retrieval of the relevant spectra for sequential and concerted breakup separately. Specifically, this allows the determination of the branching ratio of all these processes in OCS³⁺ breakup. Moreover, we find that the first step of sequential breakup is tightly aligned along the laser polarization and identify the likely electronic states of the intermediate dication that undergo unimolecular dissociation in the second step. Finally, the separated concerted breakup spectra show clearly that the central carbon atom is preferentially ejected perpendicular to the laser field.
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand
Esteban Moyano, Fernando; Vasilyeva, Nadezda; Menichetti, Lorenzo
2016-04-01
Soil carbon models developed over the last couple of decades are limited in their capacity to accurately predict the magnitudes and temporal variations in observed carbon fluxes and stocks. New process-based models are now emerging that attempt to address the shortcomings of their simpler, empirical counterparts. While a spectrum of ideas and hypothetical mechanisms are finding their way into new models, the addition of only a few processes known to significantly affect soil carbon (e.g. enzymatic decomposition, adsorption, Michaelis-Menten kinetics) has shown the potential to resolve a number of previous model-data discrepancies (e.g. priming, Birch effects). Through model-data validation, such models are a means of testing hypothetical mechanisms. In addition, they can lead to new insights into what soil carbon pools are and how they respond to external drivers. In this study we develop a model of soil carbon dynamics based on enzymatic decomposition and other key features of process-based models, i.e. simulation of carbon in particulate, soluble and adsorbed states, as well as enzyme and microbial components. Here we focus on understanding how moisture affects C decomposition at different levels, both directly (e.g. by limiting diffusion) and through interactions with other components. As the medium where most reactions and transport take place, water is central to every aspect of soil C dynamics. We compare results from a number of alternative models with experimental data in order to test different processes and parameterizations. Among other observations, we try to understand: 1. typical moisture response curves and associated temporal changes, 2. moisture-temperature interactions, and 3. diffusion effects under changing C concentrations. While the model aims to be a process-based approach and to simulate fluxes at short time scales, it remains a simplified representation using the same inputs as classical soil C models, and is thus potentially
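One of the mechanisms mentioned, Michaelis-Menten decomposition kinetics, can be sketched as a simple forward-Euler substrate decay. This is a generic one-pool illustration; the authors' model adds enzyme, microbial, adsorbed and soluble pools and moisture effects:

```python
def mm_decay(c0, v_max, k_m, dt, steps):
    """Substrate decay dC/dt = -v_max * C / (k_m + C),
    integrated with forward Euler."""
    c, path = c0, [c0]
    for _ in range(steps):
        c -= v_max * c / (k_m + c) * dt  # rate saturates at v_max
        path.append(c)
    return path

# high substrate decays almost linearly (rate near v_max), then the decay
# becomes first-order (rate ~ v_max * C / k_m) as C drops below k_m
path = mm_decay(c0=10.0, v_max=1.0, k_m=2.0, dt=0.1, steps=100)
```

The saturation in the rate law is what distinguishes this from the first-order pools of classical soil C models and is one route to reproducing priming-like behavior.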
Basal ganglia and cortical networks for sequential ordering and rhythm of complex movements
Jeffery G. Bednark
2015-07-01
Voluntary actions require the concurrent engagement and coordinated control of complex temporal (e.g. rhythm) and ordinal motor processes. Using high-resolution functional magnetic resonance imaging (fMRI) and multi-voxel pattern analysis (MVPA), we sought to determine the degree to which these complex motor processes are dissociable in basal ganglia and cortical networks. We employed three different finger-tapping tasks that differed in their demands on the sequential temporal rhythm or sequential ordering of submovements. Our results demonstrate that sequential rhythm and sequential order tasks were partially dissociable based on activation differences. The sequential rhythm task activated a widespread network centered around the SMA and basal-ganglia regions including the dorsomedial putamen and caudate nucleus, while the sequential order task preferentially activated a fronto-parietal network. There was also extensive overlap between sequential rhythm and sequential order tasks, with both tasks commonly activating bilateral premotor, supplementary motor, and superior/inferior parietal cortical regions, as well as regions of the caudate/putamen of the basal ganglia and the ventro-lateral thalamus. Importantly, within the cortical regions that were active for both complex movements, MVPA could accurately classify different patterns of activation for the sequential rhythm and sequential order tasks. In the basal ganglia, however, overlapping activation for the sequential rhythm and sequential order tasks, which was found in classic motor circuits of the putamen and ventro-lateral thalamus, could not be accurately differentiated by MVPA. Overall, our results highlight the convergent architecture of the motor system, where complex motor information that is spatially distributed in the cortex converges into a more compact representation in the basal ganglia.
What determines the impact of context on sequential action?
Ruitenberg, M.F.L.; Verwey, Willem B.; Abrahamse, E.L.
2015-01-01
In the current study we build on earlier observations that memory-based sequential action is better in the original learning context than in other contexts. We examined whether changes in the perceptual context have differential impact across distinct processing phases (preparation versus execution).
Sequential infiltration synthesis for enhancing multiple-patterning lithography
Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih
2017-06-20
Simplified methods of multiple-patterning photolithography that use sequential infiltration synthesis to modify the photoresist such that it withstands plasma etching better than unmodified resist, replacing one or more hard masks and/or a freezing step in MPL processes such as litho-etch-litho-etch or litho-freeze-litho-etch photolithography.
A Bayesian sequential processor approach to spectroscopic portal system decisions
Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D
2007-07-31
The development of faster, more reliable techniques to detect radioactive contraband in a portal-type scenario is an extremely important problem, especially in this era of constant terrorist threats. Towards this goal, the development of a model-based Bayesian sequential data processor for the detection problem is discussed. In the sequential processor, each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data, rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, the physics and signal processing models, and the decision functions are discussed, along with the first results of our research.
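The per-datum posterior update described above can be sketched for the simplest detection model: Poisson counts per interval under two hypotheses, source absent or present. The rates and threshold here are illustrative assumptions, not the paper's physics or decision functions:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def sequential_detect(counts, bg_rate, src_rate, prior=0.5, threshold=0.99):
    """Update P(source present) after each counting interval and stop
    as soon as the posterior crosses the decision threshold."""
    p = prior
    for n, k in enumerate(counts, start=1):
        l0 = poisson_pmf(k, bg_rate)              # background only
        l1 = poisson_pmf(k, bg_rate + src_rate)   # background + source
        p = p * l1 / (p * l1 + (1 - p) * l0)      # Bayes update
        if p >= threshold:
            return n, p        # detection declared at datum n
    return None, p             # no decision within the available data

# counts well above the background rate trigger an early decision
decision, posterior = sequential_detect([5, 6, 5, 7], bg_rate=1.0, src_rate=4.0)
```

As in the paper's processor, the decision time is data-driven: strong evidence ends the scan after a few intervals instead of a fixed counting period.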
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exercises.
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Ogorodnikov, I N; Isaenko, L I; Zinin, E I; Kruzhalov, A V
2000-01-01
The paper presents the results of a study of LiB₃O₅ and Li₂B₄O₇ crystals by luminescence spectroscopy with sub-nanosecond time resolution under excitation by high-power synchrotron radiation. The common origin of the non-equilibrium processes in these crystals, as well as the observed differences in their luminescence manifestations, is discussed.
Ogorodnikov, I.N. E-mail: ogo@dpt.ustu.ru; Pustovarov, V.A.; Isaenko, L.I.; Zinin, E.I.; Kruzhalov, A.V
2000-06-21
The paper presents the results of a study of LiB₃O₅ and Li₂B₄O₇ crystals by luminescence spectroscopy with sub-nanosecond time resolution under excitation by high-power synchrotron radiation. The common origin of the non-equilibrium processes in these crystals, as well as the observed differences in their luminescence manifestations, is discussed.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
General Atomic HTGR fuel reprocessing pilot plant: results of initial sequential equipment operation
1978-09-01
In September 1977, the processing of 20 large high-temperature gas-cooled reactor (LHTGR) fuel elements was completed sequentially through the head-end cold pilot plant equipment. This report gives a brief description of the equipment and summarizes the results of the sequential operation of the pilot plant. 32 figures, 15 tables
Ding, Hai-Yan; Li, Gai-Ru; Yu, Ying-Ge; Guo, Wei; Zhi, Ling; Li, Xin-Xia
2014-04-01
A method for on-line monitoring of the dissolution of valsartan and hydrochlorothiazide tablets, assisted by a mathematical separation model of linear equations, was established. The UV spectra of valsartan and hydrochlorothiazide overlap completely at their respective maximum absorption wavelengths. According to the Beer-Lambert principle of absorbance additivity, the absorptivity of each drug was determined at each maximum absorption wavelength, and the dissolution of valsartan and hydrochlorothiazide tablets was measured by fiber-optic dissolution testing (FODT) assisted by the linear-equation separation model and compared with the HPLC method. The results show that the two ingredients were determined simultaneously in real time in the given medium, with no significant difference between FODT and HPLC (p > 0.05). The consistency of the dissolution behavior indicates that the preparation process was stable across batches, with good uniformity. The dissolution curves of valsartan were faster and higher than those of hydrochlorothiazide, and the dissolutions of both drugs at 30 min conformed to the US Pharmacopoeia. It was concluded that a fiber-optic dissolution test system assisted by the linear-equation separation model can measure the dissolution of valsartan and hydrochlorothiazide simultaneously and yield complete dissolution profiles that directly reflect the dissolution rate at each time point, providing a basis for establishing drug standards. Compared with the one-point data of the HPLC method, FODT offers clear advantages for evaluating and analyzing the quality of sampled drug.
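The separation model described reduces to a small linear system: absorbance additivity at two wavelengths gives two equations in the two concentrations. The absorptivities and concentrations below are made up for illustration, not the paper's measured values:

```python
import numpy as np

# Beer-Lambert additivity: at each wavelength the measured absorbance is
# the sum of the contributions of the two drugs,
#   A(l) = e_V(l) * c_V + e_H(l) * c_H,
# so absorbances read at the two maxima form a 2x2 linear system in the
# concentrations.
E = np.array([[0.80, 0.15],    # illustrative absorptivities at valsartan's maximum
              [0.25, 0.60]])   # and at hydrochlorothiazide's maximum

c_true = np.array([0.50, 0.30])   # "unknown" concentrations (mg/mL, invented)
A = E @ c_true                    # absorbances the fiber optic would read

c = np.linalg.solve(E, A)         # recover both concentrations at once
print(c)   # -> [0.5 0.3]
```

In practice the absorptivity matrix is calibrated once from standards, after which every absorbance reading along the dissolution run yields both concentrations simultaneously.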
Oluleye, Gbemi; Smith, Robin
2016-01-01
Highlights: • MILP model developed for integration of waste heat recovery technologies in process sites. • Five thermodynamic cycles considered for exploitation of industrial waste heat. • Temperature and quantity of multiple waste heat sources considered. • Interactions with the site utility system considered. • Industrial case study presented to illustrate application of the proposed methodology. - Abstract: Thermodynamic cycles such as organic Rankine cycles, absorption chillers, absorption heat pumps, absorption heat transformers, and mechanical heat pumps are able to utilize wasted thermal energy in process sites for the generation of electrical power, chilling and heat at a higher temperature. In this work, a novel systematic framework is presented for optimal integration of these technologies in process sites. The framework is also used to assess the best design approach for integrating waste heat recovery technologies in process sites, i.e. stand-alone integration or a systems-oriented integration. The developed framework allows for: (1) selection of one or more waste heat sources (taking into account the temperatures and thermal energy content), (2) selection of one or more technology options and working fluids, (3) selection of end-uses of recovered energy, (4) exploitation of interactions with the existing site utility system and (5) the potential for heat recovery via heat exchange is also explored. The methodology is applied to an industrial case study. Results indicate a systems-oriented design approach reduces waste heat by 24%; fuel consumption by 54% and CO₂ emissions by 53% with a 2 year payback, and stand-alone design approach reduces waste heat by 12%; fuel consumption by 29% and CO₂ emissions by 20.5% with a 4 year payback. Therefore, benefits from waste heat utilization increase when interactions between the existing site utility system and the waste heat recovery technologies are explored simultaneously. The case study also shows
Moreno-Camacho, Carlos A.; Montoya-Torres, Jairo R.; Vélez-Gallego, Mario C.
2018-06-01
Only a few studies in the available scientific literature address the problem of having a group of workers that do not share identical levels of productivity during the planning horizon. This study considers a workforce scheduling problem in which the actual processing time is a function of the scheduling sequence to represent the decline in workers' performance, evaluating two classical performance measures separately: makespan and maximum tardiness. Several mathematical models are compared with each other to highlight the advantages of each approach. The mathematical models are tested with randomly generated instances available from a public e-library.
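A common way to represent declining worker performance is a position-dependent processing time. The deterioration function and data below are hypothetical stand-ins, not necessarily the functional form used in the paper:

```python
from itertools import permutations

# Actual processing time grows with the position in the sequence to model
# declining performance: p[j] * pos**a, with a hypothetical fatigue
# exponent a > 0. Job data are invented.
p = [4.0, 2.0, 3.0]
a = 0.3

def makespan(seq):
    t = 0.0
    for pos, j in enumerate(seq, start=1):
        t += p[j] * pos ** a   # later positions inflate the base time
    return t

# Brute force over all sequences (fine for tiny instances; the paper's
# MILP models are needed for realistic sizes):
best = min(permutations(range(len(p))), key=makespan)
print(best, round(makespan(best), 2))   # -> (0, 2, 1) 10.47
```

By the rearrangement inequality, the longest jobs should be scheduled earliest here, since later positions carry the largest fatigue multipliers, which is exactly the order the brute-force search finds.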
Sequential and parallel image restoration: neural network implementations.
Figueiredo, M T; Leitao, J N
1994-01-01
Sequential and parallel image restoration algorithms and their implementations on neural networks are proposed. For images degraded by linear blur and contaminated by additive white Gaussian noise, maximum a posteriori (MAP) estimation and regularization theory lead to the same high-dimensional convex optimization problem. The commonly adopted strategy (in using neural networks for image restoration) is to map the objective function of the optimization problem into the energy of a predefined network, taking advantage of its energy minimization properties. Departing from this approach, we propose neural implementations of iterative minimization algorithms which are first proved to converge. The developed schemes are based on modified Hopfield (1985) networks of graded elements, with both sequential and parallel updating schedules. An algorithm based on a fully standard Hopfield network (binary elements and zero autoconnections) is also considered. Robustness with respect to finite numerical precision is studied, and examples with real images are presented.
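For a quadratic MAP objective of the kind described, the "parallel" update amounts to gradient descent on a linear system. A minimal 1-D sketch, with an invented blur and smoothness prior rather than the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 1-D deconvolution: blur matrix H, white Gaussian noise, and a
# quadratic smoothness prior give the convex MAP objective
#   J(x) = ||y - H x||^2 + lam * ||D x||^2,
# whose minimizer solves (H^T H + lam D^T D) x = H^T y.
n = 30
H = np.zeros((n, n))
for i in range(n):                      # simple 3-tap linear blur
    for k in (-1, 0, 1):
        if 0 <= i + k < n:
            H[i, i + k] = 0.5 if k == 0 else 0.25
D = np.eye(n) - np.eye(n, k=1)          # first-difference operator
lam = 0.1

x_true = np.sin(np.linspace(0, 3, n))
y = H @ x_true + rng.normal(0, 0.01, n)

A = H.T @ H + lam * D.T @ D
b = H.T @ y

# "Parallel" update: plain gradient descent, all pixels move together
# (a sequential scheme would instead sweep one coordinate at a time).
x = np.zeros(n)
step = 1.0 / np.linalg.norm(A, 2)       # step below 2/lambda_max ensures convergence
for _ in range(2000):
    x = x - step * (A @ x - b)

x_exact = np.linalg.solve(A, b)
print(np.max(np.abs(x - x_exact)))      # gradient iterate matches the exact MAP solution
```

The point of the neural-network formulation is to realize exactly this kind of provably convergent iteration in network dynamics rather than solving the linear system directly.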
Isaak, S.; Bull, S.; Pitter, M. C.; Harrison, Ian.
2011-05-01
This paper reports the development of a SPAD device, fabricated in a UMC 0.18 μm CMOS process, and its subsequent use in an actively quenched single-photon counting imaging system. A low-doped p- guard ring (t-well layer) encircles the active area to prevent premature reverse breakdown. The array is a 16×1 parallel-output SPAD array, comprising an actively quenched SPAD circuit in each pixel, with the current value set by an external resistor RRef = 300 kΩ. The SPAD I-V response ID was found to increase slowly until VBD was reached at an excess bias voltage Ve = 11.03 V, and then to increase rapidly due to avalanche multiplication. Digital circuitry to control the SPAD array and perform the necessary data processing was designed in VHDL and implemented on an FPGA chip. At room temperature, the dark count was found to be approximately 13 kHz for most of the 16 SPAD pixels, and the dead time was estimated to be 40 ns.
Exploring the sequential lineup advantage using WITNESS.
Goodsell, Charles A; Gronlund, Scott D; Carlson, Curt A
2010-12-01
Advocates claim that the sequential lineup is an improvement over simultaneous lineup procedures, but no formal (quantitatively specified) explanation exists for why it is better. The computational model WITNESS (Clark, Appl Cogn Psychol 17:629-654, 2003) was used to develop theoretical explanations for the sequential lineup advantage. In its current form, WITNESS produced a sequential advantage only by pairing conservative sequential choosing with liberal simultaneous choosing. However, this combination failed to approximate four extant experiments that exhibited large sequential advantages. Two of these experiments became the focus of our efforts because the data were uncontaminated by likely suspect position effects. Decision-based and memory-based modifications to WITNESS approximated the data and produced a sequential advantage. The next step is to evaluate the proposed explanations and modify public policy recommendations accordingly.
Sequential cooling insert for turbine stator vane
Jones, Russel B
2017-04-04
A sequential-flow cooling insert for a turbine stator vane of a small gas turbine engine, where the impingement cooling insert is formed as a single piece by a metal additive manufacturing process such as 3D metal printing, and where the insert includes a plurality of rows of radially extending impingement cooling air holes alternating with rows of radially extending return air holes on a pressure side wall, and where the insert includes a plurality of rows of chordwise-extending second impingement cooling air holes on a suction side wall. The insert includes alternating rows of radially extending cooling air supply channels and return air channels that produce a sequence of impingement cooling, first on the pressure side and then on the suction side of the insert.
Sequential Acral Lentiginous Melanomas of the Foot
Jiro Uehara
2010-12-01
A 64-year-old Japanese woman had a lightly brown-blackish pigmented macule (1.2 cm in diameter) on the left sole of her foot. She underwent surgical excision following a diagnosis of acral lentiginous melanoma (ALM), which was confirmed histopathologically. One month after the operation, a second melanoma lesion was noticed adjacent to the grafted site. Histopathologically, the two lesions had no continuity, but HMB-45 and cyclin D1 double-positive cells were detected not only in aggregates of atypical melanocytes but also in single cells near the cutting edge of the first lesion. The unique occurrence of a sequential lesion of a primary melanoma might be caused by stimulated subclinical field cells during the wound healing process following the initial operation. This case warrants further investigation to establish the appropriate surgical margin for ALM lesions.
Sequential Therapy in Metastatic Renal Cell Carcinoma
Bradford R Hirsch
2016-04-01
The treatment of metastatic renal cell carcinoma (mRCC) has changed dramatically in the past decade. As the number of available agents, and the related volume of research, has grown, it is increasingly complex to know how to optimally treat patients. The authors are practicing medical oncologists at the US Oncology Network, the largest community-based network of oncology providers in the country, and represent the leadership of the Network's Genitourinary Research Committee. We outline our thought process in approaching the sequential therapy of mRCC and the use of real-world data to inform our approach. We also highlight the evolving literature that will impact practicing oncologists in the near future.
Non-linear Capital Taxation Without Commitment
Emmanuel Farhi; Christopher Sleet; Iván Werning; Sevin Yeltekin
2012-01-01
We study efficient non-linear taxation of labour and capital in a dynamic Mirrleesian model incorporating political economy constraints. Policies are chosen sequentially over time, without commitment. Our main result is that the marginal tax on capital income is progressive, in the sense that richer agents face higher marginal tax rates. Copyright Oxford University Press.
Petrică Andreea-Cristina
2017-07-01
Modeling exchange rate volatility became an important topic for research debate starting in 1973, when many countries switched to a floating exchange rate system. In this paper, we focus on the EUR/RON exchange rate both as an economic measure, presenting the implied economic links, and as a financial investment, analyzing its movements and fluctuations through two stochastic volatility processes: the standard Generalized Autoregressive Conditionally Heteroscedastic model (GARCH) and the Exponential Generalized Autoregressive Conditionally Heteroscedastic model (EGARCH). The objective of the conditional variance processes is to capture dependency in the return series of the EUR/RON exchange rate. On this account, analyzing exchange rates could be seen as the input for economic decisions regarding Romanian macroeconomics - the exchange rates being influenced by many factors such as interest rates, inflation, trading relationships with other countries (imports and exports), or investments - portfolio optimization, risk management, asset pricing. Therefore, we talk about the political stability and economic performance of a country, which represent a link between the two types of inputs mentioned above and influence both the macroeconomics and the investments. Based on time-varying volatility, we examine the implied volatility of daily returns of the EUR/RON exchange rate using the standard GARCH model and the asymmetric EGARCH model, whose parameters are estimated through the maximum likelihood method and whose error terms follow two distributions (Normal and Student's t). The empirical results show that EGARCH(2,1) with asymmetric order 2 and Student's t error term distribution performs better than all the estimated standard GARCH models (GARCH(1,1), GARCH(1,2), GARCH(2,1) and GARCH(2,2)). This conclusion is supported by the major advantage of the EGARCH model compared to the GARCH model, which consists in allowing good and bad news to have different impact on the
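A standard GARCH(1,1) recursion of the kind estimated in such studies can be simulated in a few lines. The parameters below are made up and are not the paper's EUR/RON estimates:

```python
import math
import random

random.seed(7)

# Simulate a GARCH(1,1) return series,
#   r_t = sigma_t * z_t,  sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
# with illustrative parameters (alpha + beta < 1 keeps the variance stationary).
omega, alpha, beta = 0.05, 0.10, 0.85

var = omega / (1 - alpha - beta)        # start at the unconditional variance
returns, variances = [], []
for _ in range(5000):
    r = math.sqrt(var) * random.gauss(0.0, 1.0)
    returns.append(r)
    variances.append(var)
    var = omega + alpha * r * r + beta * var   # conditional variance update

mean_var = sum(variances) / len(variances)
print(round(mean_var, 2), round(omega / (1 - alpha - beta), 2))
```

The simulated series shows the volatility clustering the model is designed to capture; the EGARCH variant adds an asymmetric (log-variance) response so that bad news can move volatility more than good news of the same size.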
Sequential lineup presentation: Patterns and policy
Lindsay, R C L; Mansour, Jamal K; Beaudry, J L; Leach, A-M; Bertrand, M I
2009-01-01
Sequential lineups were offered as an alternative to the traditional simultaneous lineup. Sequential lineups reduce incorrect lineup selections; however, the accompanying loss of correct identifications has resulted in controversy regarding adoption of the technique. We discuss the procedure and research relevant to (1) the pattern of results found using sequential versus simultaneous lineups; (2) reasons (theory) for differences in witness responses; (3) two methodological issues; and (4) im...
Linearization of CIF through SOS
Nadales Agut, D.E.; Reniers, M.A.; Luttik, B.; Valencia, F.
2011-01-01
Linearization is the procedure of rewriting a process term into a linear form, which consists only of basic operators of the process language. This procedure is interesting from both a theoretical and a practical point of view. In particular, a linearization algorithm is needed for the Compositional
Sequential decoders for large MIMO systems
Ali, Konpal S.
2014-05-01
Due to their ability to provide high data rates, multiple-input multiple-output (MIMO) systems have become increasingly popular. Decoding these systems with acceptable error performance is computationally very demanding. In this paper, we employ the sequential decoder using the Fano algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity, and vice versa for higher bias values. Numerical results show that moderate bias values yield a decent performance-complexity trade-off. We also attempt to bound the error by bounding the bias, using the minimum distance of a lattice. The variation of complexity with SNR has an interesting trend that shows room for considerable improvement. Our work is compared against linear decoders (LDs) aided with element-based lattice reduction (ELR) and complex Lenstra-Lenstra-Lovász (CLLL) reduction. © 2014 IFIP.
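A minimal stack (best-first) sequential decoder with a bias term illustrates the performance-complexity knob the abstract describes. This is a toy sketch, not the paper's Fano implementation, and the system below is invented:

```python
import heapq
import numpy as np

rng = np.random.default_rng(3)

# Toy real-valued MIMO system y = H x + n with BPSK symbols. The stack
# decoder searches the detection tree defined by the QR decomposition,
# ranking partial paths by (partial squared distance) - bias * depth.
# bias = 0 returns the maximum-likelihood solution; a larger bias favors
# deeper paths, trading performance for lower complexity.
n = 6
H = rng.normal(size=(n, n))
x_true = rng.choice([-1.0, 1.0], size=n)
y = H @ x_true + rng.normal(scale=0.01, size=n)

Q, R = np.linalg.qr(H)
z = Q.T @ y                             # z = R x + rotated noise

def stack_decode(bias):
    heap = [(0.0, 0, ())]               # (biased metric, depth, symbols so far)
    visited = 0
    while heap:
        metric, depth, path = heapq.heappop(heap)
        visited += 1
        if depth == n:                  # first full-length pop wins
            return np.array(path[::-1]), visited
        i = n - 1 - depth               # detect from the last symbol upward
        dist = metric + bias * depth    # undo the bias to get the raw distance
        for s in (-1.0, 1.0):
            cand = path + (s,)
            pred = sum(R[i, n - 1 - d] * cand[d] for d in range(depth + 1))
            new_dist = dist + (z[i] - pred) ** 2
            heapq.heappush(heap, (new_dist - bias * (depth + 1), depth + 1, cand))

results = {}
for bias in (0.0, 1.0):
    x_hat, visited = stack_decode(bias)
    results[bias] = (x_hat, visited)
    print(f"bias={bias}: correct={np.array_equal(x_hat, x_true)}, nodes visited={visited}")
```

With zero bias the first full-depth node popped is provably the closest lattice point (branch metrics are non-negative), so the output coincides with ML detection; raising the bias shortens the search at the risk of missing it.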
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response transformations.
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Immediate Sequential Bilateral Cataract Surgery
Kessel, Line; Andresen, Jens; Erngaard, Ditte
2015-01-01
The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS), with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function, in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery...
Blanco Rodriguez, P.; Vera Tome, F. [Natural Radioactivity Group. Universidad de Extremadura, 06071 Badajoz (Spain); Lozano, J.C. [Laboratorio de Radiactividad Ambiental. Universidad de Salamanca, 37008 Salamanca (Spain)
2014-07-01
Transfer from soil to plant is an important input of radionuclides into the food chain. Also, the mobility of radionuclides in soils is enhanced through their passage into the plant compartment. Thus, the soil-to-plant transfer of radionuclides raises the potential human dose. In radiological risk assessment models, this process is usually considered to be an equilibrium process such that the activity concentration in plants is linearly related to the soil concentration through a constant transfer factor (TF). However, the large variability presented by measured TF values leads to major uncertainties in the assessment of risks. One possible way to reduce this variability in TF values is to parametrize their determination. This paper presents correlations of TF with the major element concentrations in soils. The findings confirm the major influence of the chemical environment of a soil on the assimilation process. The variability of TF might be greatly reduced if only the labile fraction were considered. Experiments performed with plants (Helianthus annuus L.) growing in a hydroponic medium appear to confirm this suggestion, showing a linear correlation between the plant and the soil solution activity concentrations. Extracting the labile fraction of a real soil is no trivial task, however. A possible operationally definable method is to consider the water-soluble together with the exchangeable fractions of the soil. Studies performed in granitic soils showed that the labile concentration of uranium and radium strongly depended on the soil's textural characteristics. In this sense, a parametrization of the labile uranium and radium concentrations is proposed as a function of the soil's granulometric parameters. (authors)
Trial Sequential Methods for Meta-Analysis
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
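The required information size referred to here is computed like a sample size for a single trial. A sketch with illustrative numbers (trial sequential analysis would further inflate this figure to account for heterogeneity):

```python
from statistics import NormalDist

# Required information size for a fixed-effect meta-analysis of a
# continuous outcome: the usual two-sample formula
#   N = 4 * (z_{1-a/2} + z_{1-b})^2 * sigma^2 / delta^2
# with made-up inputs: 5% two-sided alpha, 90% power, an assumed
# clinically relevant effect delta and standard deviation sigma.
alpha, power = 0.05, 0.90
delta, sigma = 0.5, 1.0

z_a = NormalDist().inv_cdf(1 - alpha / 2)   # about 1.96
z_b = NormalDist().inv_cdf(power)           # about 1.28
N = 4 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2
print(round(N))   # -> 168, total patients required across all trials
```

Group-sequential monitoring boundaries are then spread over the accumulating fraction of this information size, so that interim meta-analyses can stop early without inflating the type I error.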
Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C
2008-01-01
As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
Sequential lineup laps and eyewitness accuracy.
Steblay, Nancy K; Dietrich, Hannah L; Ryan, Shannon L; Raczynski, Jeanette L; James, Kali A
2011-08-01
Police practice of double-blind sequential lineups prompts a question about the efficacy of repeated viewings (laps) of the sequential lineup. Two laboratory experiments confirmed the presence of a sequential lap effect: an increase in witness lineup picks from first to second lap, when the culprit was a stranger. The second lap produced more errors than correct identifications. In Experiment 2, lineup diagnosticity was significantly higher for sequential lineup procedures that employed a single versus double laps. Witnesses who elected to view a second lap made significantly more errors than witnesses who chose to stop after one lap or those who were required to view two laps. Witnesses with prior exposure to the culprit did not exhibit a sequential lap effect.
Sequential Product of Quantum Effects: An Overview
Gudder, Stan
2010-12-01
This article presents an overview for the theory of sequential products of quantum effects. We first summarize some of the highlights of this relatively recent field of investigation and then provide some new results. We begin by discussing sequential effect algebras which are effect algebras endowed with a sequential product satisfying certain basic conditions. We then consider sequential products of (discrete) quantum measurements. We next treat transition effect matrices (TEMs) and their associated sequential product. A TEM is a matrix whose entries are effects and whose rows form quantum measurements. We show that TEMs can be employed for the study of quantum Markov chains. Finally, we prove some new results concerning TEMs and vector densities.
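The sequential product referred to here is usually defined as A ∘ B = √A·B·√A. A small numerical check, with made-up effect matrices, shows it yields an effect but is not commutative:

```python
import numpy as np

# A quantum effect is a positive operator with eigenvalues in [0, 1].
# The standard sequential product of effects A and B is
#   A ∘ B = sqrt(A) B sqrt(A),
# which is again an effect but in general differs from B ∘ A.
def sqrtm_psd(A):
    # Matrix square root of a positive semidefinite matrix via eigh.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T.conj()

def seq_product(A, B):
    rA = sqrtm_psd(A)
    return rA @ B @ rA

A = np.array([[0.8, 0.2], [0.2, 0.5]])   # illustrative effects
B = np.array([[0.3, 0.1], [0.1, 0.9]])

AB = seq_product(A, B)
eig = np.linalg.eigvalsh(AB)
print(np.all(eig >= -1e-12) and np.all(eig <= 1 + 1e-12))  # A ∘ B is an effect
print(np.allclose(AB, seq_product(B, A)))                  # order matters here
```

The asymmetry reflects the physics: measuring A first disturbs the state on which B is subsequently measured, which is exactly the structure sequential effect algebras axiomatize.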
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
The sequential structure of brain activation predicts skill.
Anderson, John R; Bothell, Daniel; Fincham, Jon M; Moon, Jungaa
2016-01-29
In an fMRI study, participants were trained to play a complex video game. They were scanned early and then again after substantial practice. While better players showed greater activation in one region (right dorsal striatum), their relative skill was better diagnosed by considering the sequential structure of whole brain activation. Using a cognitive model that played this game, we extracted a characterization of the mental states that are involved in playing a game and the statistical structure of the transitions among these states. There was a strong correspondence between this measure of sequential structure and the skill of different players. Using multi-voxel pattern analysis, it was possible to recognize, with relatively high accuracy, the cognitive states participants were in during particular scans. We used the sequential structure of these activation-recognized states to predict the skill of individual players. These findings indicate that important features about information-processing strategies can be identified from a model-based analysis of the sequential structure of brain activation. Copyright © 2015 Elsevier Ltd. All rights reserved.
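The transition structure among recognized states can be summarized as a first-order Markov matrix. The state sequence below is fabricated for illustration and is not the study's data:

```python
import numpy as np

# Given a sequence of recognized cognitive-state labels (three
# hypothetical states here), count transitions and normalize each row
# to obtain P(next state | current state).
states = [0, 0, 1, 2, 1, 0, 1, 2, 2, 1, 0, 0, 1, 2, 1, 1, 0]
n = 3

T = np.zeros((n, n))
for a, b in zip(states, states[1:]):
    T[a, b] += 1
T = T / T.sum(axis=1, keepdims=True)   # rows become probability distributions

print(np.round(T, 2))
```

A skill-predictive feature could then be, for example, the distance between an individual player's transition matrix and the average matrix of expert players, though the study's actual predictive model may differ.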
Standardized method for reproducing the sequential X-rays flap
Brenes, Alejandra; Molina, Katherine; Gudino, Sylvia
2009-01-01
A method is validated to standardize the taking, developing, and analysis of bite-wing radiographs taken sequentially, in order to compare and evaluate detectable changes in the evolution of interproximal lesions over time. A radiographic positioner (XCP®) is modified by means of a rigid acrylic guide to achieve proper positioning of the X-ray equipment cone relative to the XCP® ring and its reorientation during the sequential X-ray process. Sixteen subjects aged 4 to 40 years are studied, for a total of 32 registries. Two X-rays of the same block of teeth of each subject were taken sequentially, with a minimum interval of 30 minutes between them, before the placement of the radiographic attachment. The images were digitized with a Super Cam® scanner and imported into software. Measurements on the X and Y axes of both X-rays were performed for comparison. The intraclass correlation index (ICI) showed that the proposed method is statistically reliable for the measurements (mm) obtained on the X and Y axes for both sequential series of X-rays (p = 0.01). The measures of central tendency and dispersion showed that the typical difference between the two measurements was negligible (mode 0.000; S = 0.083 and 0.109) and that the probability of occurrence of different values was lower than expected. (author) [es
von Helversen, Bettina; Mata, Rui
2012-12-01
We investigated the contribution of cognitive ability and affect to age differences in sequential decision making by asking younger and older adults to shop for items in a computerized sequential decision-making task. Older adults performed poorly compared to younger adults partly due to searching too few options. An analysis of the decision process with a formal model suggested that older adults set lower thresholds for accepting an option than younger participants. Further analyses suggested that positive affect, but not fluid abilities, was related to search in the sequential decision task. A second study that manipulated affect in younger adults supported the causal role of affect: Increased positive affect lowered the initial threshold for accepting an attractive option. In sum, our results suggest that positive affect is a key factor determining search in sequential decision making. Consequently, increased positive affect in older age may contribute to poorer sequential decisions by leading to insufficient search. 2013 APA, all rights reserved
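The threshold account in the abstract above lends itself to a toy simulation. The sketch below is our own illustration, not the authors' formal model; the uniform quality distribution and all names are assumptions. It shows the mechanism: an agent who accepts the first option clearing a quality threshold searches fewer options, and settles for poorer ones, when the threshold is low.

```python
import random

def sequential_search(threshold, rng, n_max=50):
    """Inspect options one at a time; accept the first whose quality
    clears the threshold, or settle for the last option seen."""
    for n in range(1, n_max + 1):
        value = rng.random()  # option quality, uniform on [0, 1)
        if value >= threshold:
            return n, value
    return n_max, value  # forced to take the final option

def average_behaviour(threshold, trials=10_000, seed=1):
    rng = random.Random(seed)
    results = [sequential_search(threshold, rng) for _ in range(trials)]
    lengths, values = zip(*results)
    return sum(lengths) / trials, sum(values) / trials

low_len, low_val = average_behaviour(0.5)    # low acceptance threshold
high_len, high_val = average_behaviour(0.9)  # high acceptance threshold
```

With these settings the low-threshold agent examines roughly 2 options on average against roughly 10 for the high-threshold agent, and accepts correspondingly poorer options.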
Mo, Yun-Fei [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China); Liu, Rang-Su, E-mail: liurangsu@sina.com [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China); Tian, Ze-An; Liang, Yong-Chao [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China); Zhang, Hai-Tao [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China); Department of Electronic and Communication Engineering, Changsha University, Changsha 410003 (China); Hou, Zhao-Yang [Department of Applied Physics, Chang’an University, Xi’an 710064 (China); Liu, Hai-Rong [College of Materials Science and Engineering, Hunan University, Changsha 410082 (China); Zhang, Ai-long [College of Physics and Electronics, Hunan University of Arts and Science, Changde 415000 (China); Zhou, Li-Li [Department of Information Engineering, Gannan Medical University, Ganzhou 341000 (China); Peng, Ping [College of Materials Science and Engineering, Hunan University, Changsha 410082 (China); Xie, Zhong [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China)
2015-05-15
An MD simulation of liquid Cu{sub 46}Zr{sub 54} alloys has been performed to understand the effects of initial melt temperatures on the microstructural evolution and mechanical properties during the quenching process. By using several microstructural analyzing methods, it is found that the icosahedral and defective icosahedral clusters play a key role in the microstructure transition. All the final solidification structures obtained at different initial melt temperatures are amorphous, and their structural and mechanical properties are non-linearly related to the initial melt temperatures, fluctuating in a certain range. In particular, there exists a best initial melt temperature, at which the glass configuration possesses the highest packing density, the optimal elastic constants, and the smallest extent of structural softening under deformation.
Wang, Jun-Sheng; Yang, Guang-Hong
2017-07-25
This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.
Ai Ling Pang
2015-09-01
Full Text Available This study was conducted to evaluate the possibility of utilizing kenaf (KNF) in LLDPE/PVOH to develop a new thermoplastic composite. The effect of KNF loading on the processability and the mechanical, thermal and water absorption properties of linear low-density polyethylene/poly(vinyl alcohol)/kenaf (LLDPE/PVOH/KNF) composites was investigated. Composites with different KNF loadings (0, 10, 20, 30, and 40 phr) were prepared using a Thermo Haake Polydrive internal mixer at a temperature of 150 °C and a rotor speed of 50 rpm for 10 min. The results indicate that the stabilization torque, tensile modulus, water uptake, and thermal stability increased, while tensile strength and elongation at break decreased with increasing filler loading. The tensile fractured surfaces observed by scanning electron microscopy (SEM) supported the deterioration in tensile properties of the LLDPE/PVOH/KNF composites with increasing KNF loading.
Polarization control of direct (non-sequential) two-photon double ionization of He
Pronin, E A; Manakov, N L; Marmo, S I; Starace, Anthony F
2007-01-01
An ab initio parametrization of the doubly-differential cross section (DDCS) for two-photon double ionization (TPDI) from an s{sup 2} subshell of an atom in a {sup 1}S{sub 0} state is presented. Analysis of the elliptic dichroism (ED) effect in the DDCS for TPDI of He and its comparison with the same effect in the concurrent process of sequential double ionization shows their qualitative and quantitative differences, thus providing a means to control and to distinguish sequential and non-sequential processes by measuring the relative ED parameter.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros; Jasra, Ajay; Law, Kody; Tempone, Raul; Zhou, Yan
2016-01-01
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
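The telescoping identity at the heart of MLMC can be sketched in a few lines. In the toy below, a hypothetical stand-in for the level-l PDE solve is just evaluation of g(u) = u² with the input snapped to a grid of spacing h_l = 2^-l; the coupling of consecutive levels through a shared sample, and the decreasing sample counts on finer levels, are the points being illustrated. The SMC layer of the paper is omitted, and all names and the toy integrand are ours.

```python
import random

def g_level(u, level):
    """Level-l approximation: evaluate g(u) = u**2 with u snapped to a
    grid of spacing h_l = 2**-level (a toy stand-in for a PDE solve)."""
    h = 2.0 ** -level
    return (h * int(u / h)) ** 2

def mlmc_estimate(max_level, samples_per_level, seed=0):
    """Telescoping MLMC estimator:
    E[g_L] = E[g_0] + sum_{l=1}^{L} E[g_l - g_{l-1}],
    each term estimated independently, with both levels of a difference
    driven by the same sample (the coupling that shrinks the variance)."""
    rng = random.Random(seed)
    total = 0.0
    for level in range(max_level + 1):
        n = samples_per_level[level]
        acc = 0.0
        for _ in range(n):
            u = rng.random()
            fine = g_level(u, level)
            acc += fine if level == 0 else fine - g_level(u, level - 1)
        total += acc / n
    return total

# Many samples on coarse (cheap) levels, few on fine (expensive) ones.
est = mlmc_estimate(6, [40_000, 20_000, 10_000, 5_000, 2_500, 1_250, 625])
```

The estimate lands near the true value E[U²] = 1/3, up to the O(h_L) discretization bias of the finest level.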
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level hL. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h0 > h1 > ⋯ > hL. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, relative to exact sampling and Monte Carlo for the distribution at the finest level hL. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
Sequential Scintigraphy in Renal Transplantation
Winkel, K. zum; Harbst, H.; Schenck, P.; Franz, H. E.; Ritz, E.; Roehl, L.; Ziegler, M.; Ammann, W.; Maier-Borst, W. [Institut Fuer Nuklearmedizin, Deutsches Krebsforschungszentrum, Heidelberg, Federal Republic of Germany (Germany)
1969-05-15
Based on experience gained from more than 1600 patients with proved or suspected kidney diseases and on results of extended studies with dogs, sequential scintigraphy was performed after renal transplantation in dogs. After intravenous injection of 500 {mu}Ci {sup 131}I-Hippuran, scintiphotos were taken during the first minute with an exposure time of 15 sec each and thereafter with an exposure of 2 min up to at least 16 min. Several examinations were evaluated digitally. 26 examinations were performed on 11 dogs with homotransplanted kidneys. Immediately after transplantation the renal function was almost normal and the bladder was filled in due time. At the beginning of rejection the initial uptake of radioactive Hippuran was reduced. The intrarenal transport became delayed; probably the renal extraction rate decreased. Corresponding to the development of an oedema in the transplant, the uptake area increased in size. In cases of thrombosis of the main artery there was no evidence of any uptake of radioactivity in the transplant. Similar results were obtained in 41 examinations on 15 persons. Patients with postoperative anuria due to acute tubular necrosis still showed some uptake of radioactivity, contrary to those with thrombosis of the renal artery, where no uptake was found. In cases of rejection the most frequent signs were a reduced initial uptake and a delayed intrarenal transport of radioactive Hippuran. Infarction could be detected by a reduced uptake in distinct areas of the transplant. (author)
Sequential provisional implant prosthodontics therapy.
Zinner, Ira D; Markovits, Stanley; Jansen, Curtis E; Reid, Patrick E; Schnader, Yale E; Shapiro, Herbert J
2012-01-01
The fabrication and long-term use of first- and second-stage provisional implant prostheses is critical to create a favorable prognosis for function and esthetics of a fixed-implant supported prosthesis. The fixed metal and acrylic resin cemented first-stage prosthesis, as reviewed in Part I, is needed for prevention of adjacent and opposing tooth movement, pressure on the implant site as well as protection to avoid micromovement of the freshly placed implant body. The second-stage prosthesis, reviewed in Part II, should be used following implant uncovering and abutment installation. The patient wears this provisional prosthesis until maturation of the bone and healing of soft tissues. The second-stage provisional prosthesis is also a fail-safe mechanism for possible early implant failures and also can be used with late failures and/or for the necessity to repair the definitive prosthesis. In addition, the screw-retained provisional prosthesis is used if and when an implant requires removal or other implants are to be placed as in a sequential approach. The creation and use of both first- and second-stage provisional prostheses involve a restorative dentist, dental technician, surgeon, and patient to work as a team. If the dentist alone cannot do diagnosis and treatment planning, surgery, and laboratory techniques, he or she needs help by employing the expertise of a surgeon and a laboratory technician. This team approach is essential for optimum results.
Jong Won Kim
2017-01-01
Full Text Available Polyethylene is one of the most commonly used polymer materials. Even though linear low density polyethylene (LLDPE) has better mechanical properties than other kinds of polyethylene, it is not used as a textile material because of its plastic behavior, which makes it easy to break at the die during melt spinning. In this study, LLDPE fibers were successfully produced with a new approach using dry-jet wet spinning and a heat drawing process. The fibers were filled with carbon nanotubes (CNTs) to improve the strength and reduce plastic deformation. The crystallinity, degree of orientation, mechanical properties (strength to yield, strength to break, elongation at break, and initial modulus), electrical conductivity, and thermal properties of the LLDPE fibers were studied. The results show that the addition of CNTs improved the tensile strength and the degree of crystallinity. The heat drawing process resulted in a significant increase in the tensile strength and the orientation of the CNTs and polymer chains. In addition, this study demonstrates that the heat drawing process effectively decreases the plastic deformation of LLDPE.
Zhang, Da; She, Jin; Yang, Jun; Yu, Mengsun
2015-06-01
Acute hypoxia activates several autonomic mechanisms, mainly in the cardiovascular and respiratory systems. The influence of acute hypoxia on linear and nonlinear heart rate variability (HRV) has been studied, but the parameters in the process of hypoxia are still unclear. Although the changes of HRV in the frequency domain are related to autonomic responses, it is also unknown how nonlinear dynamics change with the decrease of ambient atmospheric pressure. Eight healthy male subjects were exposed to simulated altitude from sea level to 3600 m in 10 min. HRV parameters in the frequency domain were analyzed by wavelet packet transform (Daubechies 4, 4 level) followed by Hilbert transform to assess the spectral power of modified low frequency (0.0625-0.1875 Hz, LFmod), modified high frequency (0.1875-0.4375 Hz, HFmod), and the LFmod/HFmod ratio in every 1 min. Nonlinear parameters were also quantified by sample entropy (SampEn) and the short-term fractal correlation exponent (α1) in the process. Hypoxia was associated with the depression of both the LFmod and HFmod components, which were significantly lower than at sea level at 3600 m and 2880 m, respectively. Monitoring nonlinear HRV parameters continuously in the process of hypoxia would be an effective way to evaluate the different regulatory mechanisms of the autonomic nervous system.
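Sample entropy, one of the two nonlinear measures used above, is easy to state concretely. The sketch below is a textbook-style implementation, not the authors' code, and is simplified: the template counts at lengths m and m+1 are taken over all available windows, and the tolerance is the conventional 0.2 × SD.

```python
import math
import random
import statistics

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r) = -ln(A/B), where B counts pairs of length-m
    templates within Chebyshev distance r and A counts the same for
    length m+1; lower values indicate a more regular signal."""
    r = r_frac * statistics.pstdev(x)

    def pairs_within_r(length):
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        return sum(
            1
            for i in range(len(templates))
            for j in range(i + 1, len(templates))
            if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r
        )

    b = pairs_within_r(m)
    a = pairs_within_r(m + 1)
    return -math.log(a / b) if a and b else float("inf")

regular = [math.sin(2 * math.pi * i / 10) for i in range(200)]  # periodic
rng = random.Random(0)
noisy = [rng.random() for _ in range(200)]                      # irregular
```

A periodic series scores far lower than white noise, which is why SampEn can track changes in the regularity of heart rate dynamics during hypoxia.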
Top-down attention affects sequential regularity representation in the human visual system.
Kimura, Motohiro; Widmann, Andreas; Schröger, Erich
2010-08-01
Recent neuroscience studies using visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the visual sensory system, have shown that although sequential regularities embedded in successive visual stimuli can be automatically represented in the visual sensory system, the existence of a sequential regularity itself does not guarantee that the sequential regularity will be automatically represented. In the present study, we investigated the effects of top-down attention on sequential regularity representation in the visual sensory system. Our results showed that a sequential regularity (SSSSD) embedded in a modified oddball sequence, where infrequent deviant (D) and frequent standard stimuli (S) differing in luminance were regularly presented (SSSSDSSSSDSSSSD...), was represented in the visual sensory system only when participants attended the sequential regularity in luminance, but not when participants ignored the stimuli or simply attended the dimension of luminance per se. This suggests that top-down attention affects sequential regularity representation in the visual sensory system and that top-down attention is a prerequisite for particular sequential regularities to be represented. Copyright 2010 Elsevier B.V. All rights reserved.
Ferencik, Maros; Lisauskas, Jennifer B.; Cury, Ricardo C.; Hoffmann, Udo; Abbara, Suhny; Achenbach, Stephan; Karl, W. Clem; Brady, Thomas J.; Chan, Raymond C.
2006-01-01
Multi-detector computed tomography (MDCT) permits detection of coronary plaque. However, noise and blurring impair the accuracy and precision of plaque measurements. The aim of the study was to evaluate MDCT post-processing based on non-linear image deblurring and edge-preserving noise suppression for measurements of plaque size. Contrast-enhanced MDCT coronary angiography was performed in four subjects (mean age 55 ± 5 years, mean heart rate 54 ± 5 bpm) using a 16-slice scanner (Siemens Sensation 16, collimation 16 x 0.75 mm, gantry rotation 420 ms, tube voltage 120 kV, tube current 550 mAs, 80 mL of contrast). Intravascular ultrasound (IVUS; 40 MHz probe) was performed in one vessel in each patient and served as a reference standard. MDCT vessel cross-sectional images (1 mm thickness) were created perpendicular to the centerline and aligned with corresponding IVUS images. MDCT images were processed using a deblurring and edge-preserving noise suppression algorithm. Then, three independent blinded observers segmented lumen and outer vessel boundaries in each modality to obtain vessel cross-sectional area and wall area in the unprocessed MDCT cross-sections, post-processed MDCT cross-sections and corresponding IVUS. The wall area measurement difference for unprocessed and post-processed MDCT images relative to IVUS was 0.4 ± 3.8 mm{sup 2} and -0.2 ± 2.2 mm{sup 2}, respectively. In conclusion, MDCT permitted accurate in vivo measurement of wall area and vessel cross-sectional area as compared to IVUS. Post-processing to reduce blurring and noise reduced variability of wall area measurements and reduced measurement bias for both wall area and vessel cross-sectional area.
Groenwold, A.A.; Etman, L.F.P.
2008-01-01
We study the classical topology optimization problem, in which minimum compliance is sought, subject to linear constraints. Using a dual statement, we propose two separable and strictly convex subproblems for use in sequential approximate optimization (SAO) algorithms. Respectively, the subproblems
Bosansky, Branislav; Xin Jiang, Albert; Tambe, Milind
2015-01-01
representation of sequential strategies and linear programming, or by incremental strategy generation of iterative double-oracle methods. In this paper, we present novel hybrid of these two approaches: compact-strategy double-oracle (CS-DO) algorithm that combines the advantages of the compact representation...
Tradable permit allocations and sequential choice
MacKenzie, Ian A. [Centre for Economic Research, ETH Zuerich, Zurichbergstrasse 18, 8092 Zuerich (Switzerland)
2011-01-15
This paper investigates initial allocation choices in an international tradable pollution permit market. For two sovereign governments, we compare allocation choices that are either simultaneously or sequentially announced. We show sequential allocation announcements result in higher (lower) aggregate emissions when announcements are strategic substitutes (complements). Whether allocation announcements are strategic substitutes or complements depends on the relationship between the follower's damage function and governments' abatement costs. When the marginal damage function is relatively steep (flat), allocation announcements are strategic substitutes (complements). For quadratic abatement costs and damages, sequential announcements provide a higher level of aggregate emissions. (author)
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
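For reference, the familiar direction of the equivalence (our restatement of the standard epigraph trick, not the paper's converse construction) puts a discrete Chebyshev approximation problem into LP form by introducing one extra variable that bounds all residuals:

```latex
% Discrete Chebyshev (minimax) approximation of data (a_i, y_i) by a_i^T c:
\min_{c}\;\max_{1\le i\le m}\bigl|a_i^{\mathsf{T}}c-y_i\bigr|
\quad\Longleftrightarrow\quad
\min_{c,\,t}\; t
\quad\text{s.t.}\quad
-t \;\le\; a_i^{\mathsf{T}}c-y_i \;\le\; t,
\qquad i=1,\dots,m.
```

The paper's contribution is the converse: any linear program can likewise be recast as a Chebyshev linear approximation problem.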
Taschek, Marco; Egermann, Jan; Schwarz, Sabrina; Leipertz, Alfred
2005-11-01
Optimum fuel preparation and mixture formation are core issues in the development of modern direct-injection (DI) Diesel engines, as these are crucial for defining the boundary conditions for the subsequent combustion and pollutant formation process. The local fuel/air ratio can be seen as one of the key parameters for this optimization process, as it allows the characterization and comparison of mixture formation quality. For the first time, to the best of our knowledge, linear Raman spectroscopy is used to detect the fuel/air ratio and its change along a line of a few millimeters directly and nonintrusively inside the combustion bowl of a DI Diesel engine. By a careful optimization of the measurement setup, the weak Raman signals could be separated successfully from disturbing interferences. A simultaneous measurement of the densities of air and fuel was possible along a line of about 10 mm length, allowing a time- and space-resolved measurement of the local fuel/air ratio. This could be performed in a nonreacting atmosphere as well as during fired operating conditions. The positioning of the measurement volume next to the interaction point of one of the spray jets with the wall of the combustion bowl allowed a near-wall analysis of the mixture formation process for a six-hole nozzle under varying injection and engine conditions. The results clearly show the influence of the nozzle geometry and preinjection on the mixing process. In contrast, modulation of the intake air temperature merely led to minor changes of the fuel concentration in the measurement volume.
Condorelli, Rosalia
2015-01-01
Using Census of India data from 1901 to 2011 and national and international reports on women's condition in India, beginning with sex ratio trends according to regional distribution up to female infanticides and sex-selective abortions and dowry deaths, this study examines the sociological aspects of the gender imbalance in modern contemporary India. Gender inequality persistence in India proves that new values and structures do not necessarily lead to the disappearance of older forms, but they can co-exist with mutual adaptations and reinforcements. Data analysis suggests that these unexpected combinations are not comprehensible in light of a linear concept of social change which is founded, in turn, on a concept of social systems as linear interaction systems that relate to environmental perturbations according to proportional cause and effect relationships. From this perspective, in fact, behavioral attitudes and interaction relationships should be less and less proportionally regulated by traditional values and practices as exposure to modernizing influences increases. And progressive decreases should be found in rates of social indicators of gender inequality like dowry deaths (the inverse should be found in sex ratio trends). However, the data do not confirm these trends. This finding leads us to emphasize a new theoretical and methodological approach toward the study of social systems, namely the conception of social systems as complex adaptive systems and the consequential emergentist, nonlinear conception of social change processes. Within the framework of the emergentist theory of social change it is possible to understand the lasting strength of the patriarchal tradition and its problematic consequences in modern contemporary India.
Shandiz, Mahdi Heravian; Khalilzadeh, Mohammadmahdi; Anvari, Kazem [Mashhad Branch, Islamic Azad University, Mashhad (Iran, Islamic Republic of); Layen, Ghorban Safaeian [Mashhad University of Medical Science, Mashhad (Iran, Islamic Republic of)
2015-03-15
In order to keep radiation oncology linear accelerators at an acceptable performance level, it is necessary to apply a reliable quality assurance (QA) program. The QA protocols, published by authoritative organizations such as the American Association of Physicists in Medicine (AAPM), determine the quality control (QC) tests which should be performed on medical linear accelerators and the threshold levels for each test. The purpose of this study is to increase the accuracy and precision of the selected QC tests in order to increase the quality of treatment, and also to increase the speed of the tests to convince crowded centers to start a reliable QA program. A new method has been developed for two of the QC tests: the optical distance indicator (ODI) QC test as a daily test and the gantry angle QC test as a monthly test. This method uses an image processing approach utilizing snapshots taken by a CCD camera to measure the source to surface distance (SSD) and the gantry angle. The new method for the ODI QC test has an accuracy of 99.95% with a standard deviation of 0.061 cm, and the new method for the gantry angle QC test has a precision of 0.43 degrees. The proposed automated method, used for both the ODI and gantry angle QC tests, yields highly accurate and precise results which are objective, and human-caused errors have no effect on the results. The results show that they are in the acceptable range for both QC tests, according to AAPM Task Group 142.
Shandiz, Mahdi Heravian; Khalilzadeh, Mohammadmahdi; Anvari, Kazem; Layen, Ghorban Safaeian
2015-01-01
In order to keep radiation oncology linear accelerators at an acceptable performance level, it is necessary to apply a reliable quality assurance (QA) program. The QA protocols, published by authoritative organizations such as the American Association of Physicists in Medicine (AAPM), determine the quality control (QC) tests which should be performed on medical linear accelerators and the threshold levels for each test. The purpose of this study is to increase the accuracy and precision of the selected QC tests in order to increase the quality of treatment, and also to increase the speed of the tests to convince crowded centers to start a reliable QA program. A new method has been developed for two of the QC tests: the optical distance indicator (ODI) QC test as a daily test and the gantry angle QC test as a monthly test. This method uses an image processing approach utilizing snapshots taken by a CCD camera to measure the source to surface distance (SSD) and the gantry angle. The new method for the ODI QC test has an accuracy of 99.95% with a standard deviation of 0.061 cm, and the new method for the gantry angle QC test has a precision of 0.43 degrees. The proposed automated method, used for both the ODI and gantry angle QC tests, yields highly accurate and precise results which are objective, and human-caused errors have no effect on the results. The results show that they are in the acceptable range for both QC tests, according to AAPM Task Group 142.
Accurately controlled sequential self-folding structures by polystyrene film
Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse
2017-08-01
Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film are used to provide heat stimuli by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are achieved to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, by using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.
Efficacy of premixed versus sequential administration of ...
sequential administration in separate syringes on block characteristics, haemodynamic parameters, side effect profile and postoperative analgesic requirement. Trial design: This was a prospective, randomised clinical study. Method: Sixty orthopaedic patients scheduled for elective lower limb surgery under spinal ...
Structural Consistency, Consistency, and Sequential Rationality.
Kreps, David M; Ramey, Garey
1987-01-01
Sequential equilibria comprise consistent beliefs and a sequentially rational strategy profile. Consistent beliefs are limits of Bayes rational beliefs for sequences of strategies that approach the equilibrium strategy. Beliefs are structurally consistent if they are rationalized by some single conjecture concerning opponents' strategies. Consistent beliefs are not necessarily structurally consistent, notwithstanding a claim by Kreps and Robert Wilson (1982). Moreover, the spirit of stru...
Tanwiwat Jaikuna
2017-02-01
Full Text Available Purpose: To develop an in-house software program that is able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate the biological dose distribution and biological dose volume histograms. The physical dose from each voxel in treatment planning was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by the differentiation between the dose volume histogram from CERR and the treatment planning system. An equivalent dose in 2 Gy fractions (EQD2) was calculated using the biological effective dose (BED) based on the LQL model. The software calculation and the manual calculation were compared for EQD2 verification with pair t-test statistical analysis using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Different physical doses were found between CERR and the treatment planning system (TPS) in Oncentra, with 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum when determined by D2cc, and less than 1% in Pinnacle. The difference in the EQD2 between the software calculation and the manual calculation was not significant (0.00%), at p-values 0.820, 0.095, and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320, and 0.849 for brachytherapy (BT) in HR-CTV, bladder, and rectum, respectively. Conclusions: The Isobio software is a feasible tool to generate the biological dose distribution and biological dose volume histogram for treatment plan evaluation in both EBRT and BT.
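The EQD2 conversion the abstract verifies by hand is compact enough to state in code. The sketch below is our own, with hypothetical parameter names, and implements only the standard linear-quadratic branch; the LQL model used by Isobio additionally switches to a linear tail above a transition dose, which is omitted here.

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions via the plain LQ model:
    BED  = D * (1 + d / (alpha/beta))
    EQD2 = BED / (1 + 2 / (alpha/beta))   (all doses in Gy)."""
    bed = total_dose * (1.0 + dose_per_fraction / alpha_beta)
    return bed / (1.0 + 2.0 / alpha_beta)

# A 2 Gy/fraction schedule is its own EQD2 by construction; a
# hypofractionated 10 x 3 Gy schedule at alpha/beta = 10 converts to 32.5 Gy.
conventional = eqd2(total_dose=50.0, dose_per_fraction=2.0, alpha_beta=10.0)
hypofractionated = eqd2(total_dose=30.0, dose_per_fraction=3.0, alpha_beta=10.0)
```

Applying this voxel-wise to a physical dose grid yields the biological dose distribution the abstract describes.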
Huihua Feng
2016-06-01
Full Text Available For a compression ignition (CI) free piston engine linear generator (FPLG), injection timing is one of the most important parameters that affect its performance, especially for the one-stroke starting operation mode. In this paper, two injection control strategies are proposed using piston position and velocity signals. It was found experimentally that the injection timing’s influence on the compression ratio, the peak in-cylinder gas pressure and the indicated work (IW) is different from that of traditional reciprocating CI engines. The maximum IW of the ignition starting cylinder, namely the left cylinder (LC), and of the right cylinder (RC) are 132.7 J and 138.1 J, respectively. The thermal-dynamic model for simulating the working processes of the FPLG is built and verified by experimental results. The numerical simulation results show that running instability and imbalance between the LC and RC are obvious characteristics when adopting the injection strategy with velocity feedback. These could be solved by setting different triggering velocity thresholds for the two cylinders. The IW output from the FPLG under this strategy is higher than that obtained by adopting the position feedback strategy, and the maximum IW of the RC could reach 162.3 J. Under this strategy, the prototype is able to achieve better starting conditions and could operate continuously for dozens of cycles.
M. Moravej
2016-02-01
Introduction: Studying the hydrological cycle, especially at large scales such as water catchments, is difficult and complicated even though the number of hydrological components is limited. This complexity arises from complex interactions between hydrological components and the environment. Recognition, determination, and modeling of all interactive processes would be needed to address this issue, but this is not feasible for practical engineering problems. It is therefore more convenient to consider hydrological components as stochastic phenomena and to use stochastic models for them. Stochastic simulation of time series models related to water resources, particularly hydrologic time series, has been widely used in recent decades to solve issues pertaining to the planning and management of water resource systems. In this study, time series models were fitted to the precipitation, evaporation, and stream flow series separately, and the relationships between the stream flow and precipitation processes are investigated. In fact, the three mentioned processes should be modeled in parallel in order to acquire a comprehensive vision of hydrological conditions in the region. Moreover, the relationships between hydrologic processes have mostly been studied with respect to their trends. It is desirable to investigate the relationship between trends of hydrological processes and climate change, while the relationship between the fitted models has not been taken into consideration. The main objective of this study is to investigate the relationships between hydrological processes, their effects on each other, and the selected models. Material and Method: In the current study, four sub-basins of the Lake Urmia Basin, namely Zolachay (A), Nazloochay (B), Shahrchay (C), and Barandoozchay (D), were considered. Precipitation, evaporation, and stream flow time series were modeled by linear time series. Fundamental assumptions of time series analysis namely
Sequential formation of subgroups in OB associations
Elmegreen, B.G.; Lada, C.J.
1977-01-01
We reconsider the structure and formation of OB associations in view of recent radio and infrared observations of the adjacent molecular clouds. As a result of this reexamination, we propose that OB subgroups are formed in a step-by-step process which involves the propagation of ionization (I) and shock (S) fronts through a molecular cloud complex. OB stars formed at the edge of a molecular cloud drive these I-S fronts into the cloud. A layer of dense neutral material accumulates between the I and S fronts and eventually becomes gravitationally unstable. This process is analyzed in detail. Several arguments concerning the temperature and mass of this layer suggest that a new OB subgroup will form. After approximately one-half million years, these stars will emerge from and disrupt the star-forming layer. A new shock will be driven into the remaining molecular cloud and will initiate another cycle of star formation. Several observed properties of OB associations are shown to follow from a sequential star-forming mechanism. These include the spatial separation and systematic differences in age of OB subgroups in a given association, the regularity of subgroup masses, the alignment of subgroups along the galactic plane, and their physical expansion. Detailed observations of ionization fronts, masers, IR sources, and molecular clouds are also in agreement with this model. Finally, this mechanism provides a means of dissipating a molecular cloud and exposing less massive stars (e.g., T Tauri stars) which may have formed ahead of the shock as the original cloud collapsed and fragmented.
Olver, Peter J
2018-01-01
This textbook develops the essential tools of linear algebra, with the goal of imparting technique alongside contextual understanding. Applications go hand-in-hand with theory, each reinforcing and explaining the other. This approach encourages students to develop not only the technical proficiency needed to go on to further study, but an appreciation for when, why, and how the tools of linear algebra can be used across modern applied mathematics. Providing an extensive treatment of essential topics such as Gaussian elimination, inner products and norms, and eigenvalues and singular values, this text can be used for an in-depth first course, or an application-driven second course in linear algebra. In this second edition, applications have been updated and expanded to include numerical methods, dynamical systems, data analysis, and signal processing, while the pedagogical flow of the core material has been improved. Throughout, the text emphasizes the conceptual connections between each application and the un...
Wakamiya, Eiji; Okumura, Tomohito; Nakanishi, Makoto; Takeshita, Takashi; Mizuta, Mekumi; Kurimoto, Naoko; Tamai, Hiroshi
2011-06-01
To clarify whether rapid naming ability itself is a main underpinning factor of rapid automatized naming tests (RAN) and how deep an influence the discrete decoding process has on reading, we performed discrete naming tasks and discrete hiragana reading tasks as well as sequential naming tasks and sequential hiragana reading tasks with 38 Japanese schoolchildren with reading difficulty. There were high correlations between both discrete and sequential hiragana reading and sentence reading, suggesting that some mechanism which automatizes hiragana reading makes sentence reading fluent. In object and color tasks, there were moderate correlations between sentence reading and sequential naming, and between sequential naming and discrete naming. But no correlation was found between reading tasks and discrete naming tasks. The influence of rapid naming ability of objects and colors upon reading seemed relatively small, and multi-item processing may work in relation to these. In contrast, in the digit naming task there was moderate correlation between sentence reading and discrete naming, while no correlation was seen between sequential naming and discrete naming. There was moderate correlation between reading tasks and sequential digit naming tasks. Digit rapid naming ability has more direct effect on reading while its effect on RAN is relatively limited. The ratio of how rapid naming ability influences RAN and reading seems to vary according to kind of the stimuli used. An assumption about components in RAN which influence reading is discussed in the context of both sequential processing and discrete naming speed. Copyright © 2010 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Willson, Victor L.; And Others
1985-01-01
Presents results of confirmatory factor analysis of the Kaufman Assessment Battery for children which is based on the underlying theoretical model of sequential, simultaneous, and achievement factors. Found support for the two-factor, simultaneous and sequential processing model. (MCF)
Group sequential designs for stepped-wedge cluster randomised trials.
Grayling, Michael J; Wason, James Ms; Mander, Adrian P
2017-10-01
The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into
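The error-spending approach to group sequential design mentioned above can be illustrated with an O'Brien-Fleming-type spending function; this is a generic sketch of the standard form, not the exact function or parameters used in the trial designs discussed:

```python
from statistics import NormalDist

def obf_spending(t, alpha=0.05):
    """O'Brien-Fleming-type error-spending function:
    f(t) = min(alpha, 2 - 2*Phi(z_{alpha/2} / sqrt(t))),
    where t in (0, 1] is the fraction of information accrued."""
    z = NormalDist().inv_cdf(1.0 - alpha / 2.0)
    return min(alpha, 2.0 * (1.0 - NormalDist().cdf(z / t ** 0.5)))

# Cumulative type-I error spent at three equally spaced interim looks:
# very little is spent early, and the full alpha is reached at the final look.
for t in (1/3, 2/3, 1.0):
    print(round(obf_spending(t), 5))
```

Stopping boundaries at each interim analysis are then chosen so that the cumulative type-I error equals the spent amount, which is what allows early stopping without inflating the overall error rate.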
Human visual system automatically represents large-scale sequential regularities.
Kimura, Motohiro; Widmann, Andreas; Schröger, Erich
2010-03-04
Our brain recordings reveal that large-scale sequential regularities defined across non-adjacent stimuli can be automatically represented in visual sensory memory. To show this, we adapted to the visual domain an auditory paradigm developed by Sussman, E., Ritter, W., and Vaughan, H. G. Jr. (1998). Predictability of stimulus deviance and the mismatch negativity. NeuroReport, 9, 4167-4170, and Sussman, E., and Gumenyuk, V. (2005). Organization of sequential sounds in auditory memory. NeuroReport, 16, 1519-1523, by presenting task-irrelevant infrequent luminance-deviant stimuli (D, 20%) inserted among task-irrelevant frequent stimuli of standard luminance (S, 80%) in randomized (randomized condition, SSSDSSSSSDSSSSD...) and fixed manners (fixed condition, SSSSDSSSSDSSSSD...). Comparison of the visual mismatch negativity (visual MMN), an event-related brain potential (ERP) index of memory-mismatch processes in the human visual sensory system, revealed that the visual MMN elicited by deviant stimuli was reduced in the fixed compared to the randomized condition. Thus, the large-scale sequential regularity present in the fixed condition (SSSSD) must have been represented in visual sensory memory. Interestingly, this effect did not occur in conditions with stimulus-onset asynchronies (SOAs) of 480 and 800 ms but was confined to the 160-ms SOA condition, supporting the hypothesis that large-scale regularity extraction was based on perceptual grouping of the five successive stimuli defining the regularity. 2010 Elsevier B.V. All rights reserved.
Structural and Functional Impacts of ER Coactivator Sequential Recruitment.
Yi, Ping; Wang, Zhao; Feng, Qin; Chou, Chao-Kai; Pintilie, Grigore D; Shen, Hong; Foulds, Charles E; Fan, Guizhen; Serysheva, Irina; Ludtke, Steven J; Schmid, Michael F; Hung, Mien-Chie; Chiu, Wah; O'Malley, Bert W
2017-09-07
Nuclear receptors recruit multiple coactivators sequentially to activate transcription. This "ordered" recruitment allows different coactivator activities to engage the nuclear receptor complex at different steps of transcription. Estrogen receptor (ER) recruits steroid receptor coactivator-3 (SRC-3) primary coactivator and secondary coactivators, p300/CBP and CARM1. CARM1 recruitment lags behind the binding of SRC-3 and p300 to ER. Combining cryo-electron microscopy (cryo-EM) structure analysis and biochemical approaches, we demonstrate that there is a close crosstalk between early- and late-recruited coactivators. The sequential recruitment of CARM1 not only adds a protein arginine methyltransferase activity to the ER-coactivator complex, it also alters the structural organization of the pre-existing ERE/ERα/SRC-3/p300 complex. It induces a p300 conformational change and significantly increases p300 HAT activity on histone H3K18 residues, which, in turn, promotes CARM1 methylation activity on H3R17 residues to enhance transcriptional activity. This study reveals a structural role for a coactivator sequential recruitment and biochemical process in ER-mediated transcription. Copyright © 2017 Elsevier Inc. All rights reserved.
Sequential extraction applied to Peruibe black mud, SP, Brazil
Torrecilha, Jefferson Koyaishi
2014-01-01
The Peruibe black mud is used in therapeutic treatments for psoriasis, peripheral dermatitis, acne, and seborrhoea, as well as in the treatment of myalgia, arthritis, rheumatism, and non-articular processes. Like other medicinal clays, it may not be free from adverse health effects, since hazardous minerals can cause respiratory problems and toxic elements other effects. Once used for therapeutic purposes, any given material should be fully characterized, and thus samples of Peruibe black mud were analyzed to determine their physical and chemical properties: moisture content, organic matter, and loss on ignition; pH, particle size, cation exchange capacity, and swelling index. The elemental composition was determined by neutron activation analysis, graphite furnace atomic absorption, and X-ray fluorescence; the mineralogical composition was determined by X-ray diffraction. Another tool widely used to evaluate the behavior of trace elements in various environmental matrices is sequential extraction. Thus, a sequential extraction procedure was applied to fractionate the mud into specific geochemical forms and to verify how, and how much of, each element may be contained in it. Among the several sequential extraction procedures, the BCR-701 (Community Bureau of Reference) method was used, since it is considered the most reproducible. A simple extraction with artificial sweat was also applied in order to verify which components are potentially available for absorption through the patient's skin during topical treatment. The results indicated that the mud is basically composed of a silty-clay material, rich in organic matter and with good cation exchange capacity. There were no significant variations in the mineralogy and elemental composition of the in natura and matured mud forms. The analyses by sequential extraction and by simple extraction indicated that the elements possibly available in larger
Blyth, T S
2002-01-01
Most of the introductory courses on linear algebra develop the basic theory of finite dimensional vector spaces, and in so doing relate the notion of a linear mapping to that of a matrix. Generally speaking, such courses culminate in the diagonalisation of certain matrices and the application of this process to various situations. Such is the case, for example, in our previous SUMS volume Basic Linear Algebra. The present text is a continuation of that volume, and has the objective of introducing the reader to more advanced properties of vector spaces and linear mappings, and consequently of matrices. For readers who are not familiar with the contents of Basic Linear Algebra we provide an introductory chapter that consists of a compact summary of the prerequisites for the present volume. In order to consolidate the student's understanding we have included a large number of illustrative and worked examples, as well as many exercises that are strategically placed throughout the text. Solutions to the ex...
Sequential dependencies in magnitude scaling of loudness
Joshi, Suyash Narendra; Jesteadt, Walt
2013-01-01
Ten normally hearing listeners used a programmable sone-potentiometer knob to adjust the level of a 1000-Hz sinusoid to match the loudness of numbers presented to them in a magnitude production task. Three different power-law exponents (0.15, 0.30, and 0.60) and a log-law with equal steps in dB were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimized these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial...
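Under a power-law knob mapping like those described (exponents 0.15, 0.30, 0.60), the level required for a target loudness follows from Stevens' power law; a minimal sketch, where the proportionality constant k is an assumed free parameter rather than a value from the study:

```python
import math

def level_from_loudness(n, exponent, k=1.0):
    """dB level (re an arbitrary reference) that yields target loudness n
    under the power law n = k * p**exponent, with p the sound pressure ratio,
    so p = 10**(L/20) and hence L = (20/exponent) * log10(n/k)."""
    return (20.0 / exponent) * math.log10(n / k)

# Doubling loudness under exponent 0.6 requires about 10 dB;
# under exponent 0.3 it requires about 20 dB (twice the step size).
print(round(level_from_loudness(2, 0.6) - level_from_loudness(1, 0.6), 1))
print(round(level_from_loudness(2, 0.3) - level_from_loudness(1, 0.3), 1))
```

Smaller exponents thus stretch the same loudness range over a wider dB range, which is one way the knob mapping can reshape the measured loudness function.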
Louise A. Brown
2016-10-01
Working memory is vulnerable to age-related decline, but there is debate regarding the age-sensitivity of different forms of spatial-sequential working memory task, depending on their passive or active nature. The functional architecture of spatial working memory was therefore explored in younger (18-40 years) and older (64-85 years) adults, using passive and active recall tasks. Spatial working memory was assessed using a modified version of the Spatial Span subtest of the Wechsler Memory Scale – Third Edition (WMS-III; Wechsler, 1998). Across both age groups, the effects of interference (control, visual, or spatial) and recall type (forward and backward) were investigated. There was a clear effect of age group, with younger adults demonstrating a larger spatial working memory capacity than the older adults overall. There was also a specific effect of interference, with the spatial interference task (spatial tapping) reliably reducing performance relative to both the control and visual interference (dynamic visual noise) conditions in both age groups and both recall types. This suggests that younger and older adults have similar dependence upon active spatial rehearsal, and that both forward and backward recall require this processing capacity. Linear regression analyses were then carried out within each age group, to assess the predictors of performance in each recall format (forward and backward). Specifically, the backward recall task was significantly predicted by age, within both the younger and older adult groups. This finding supports previous literature showing lifespan linear declines in spatial-sequential working memory, and in working memory tasks from other domains, but contrasts with previous evidence that backward spatial span is no more sensitive to aging than forward span. The study suggests that backward spatial span is indeed more processing-intensive than forward span, even when both tasks include a retention period, and that age
Brown, Louise A.
2016-01-01
Working memory is vulnerable to age-related decline, but there is debate regarding the age-sensitivity of different forms of spatial-sequential working memory task, depending on their passive or active nature. The functional architecture of spatial working memory was therefore explored in younger (18–40 years) and older (64–85 years) adults, using passive and active recall tasks. Spatial working memory was assessed using a modified version of the Spatial Span subtest of the Wechsler Memory Scale – Third Edition (WMS-III; Wechsler, 1998). Across both age groups, the effects of interference (control, visual, or spatial), and recall type (forward and backward), were investigated. There was a clear effect of age group, with younger adults demonstrating a larger spatial working memory capacity than the older adults overall. There was also a specific effect of interference, with the spatial interference task (spatial tapping) reliably reducing performance relative to both the control and visual interference (dynamic visual noise) conditions in both age groups and both recall types. This suggests that younger and older adults have similar dependence upon active spatial rehearsal, and that both forward and backward recall require this processing capacity. Linear regression analyses were then carried out within each age group, to assess the predictors of performance in each recall format (forward and backward). Specifically the backward recall task was significantly predicted by age, within both the younger and older adult groups. This finding supports previous literature showing lifespan linear declines in spatial-sequential working memory, and in working memory tasks from other domains, but contrasts with previous evidence that backward spatial span is no more sensitive to aging than forward span. The study suggests that backward spatial span is indeed more processing-intensive than forward span, even when both tasks include a retention period, and that age predicts
Brown, Louise A
2016-01-01
Working memory is vulnerable to age-related decline, but there is debate regarding the age-sensitivity of different forms of spatial-sequential working memory task, depending on their passive or active nature. The functional architecture of spatial working memory was therefore explored in younger (18-40 years) and older (64-85 years) adults, using passive and active recall tasks. Spatial working memory was assessed using a modified version of the Spatial Span subtest of the Wechsler Memory Scale - Third Edition (WMS-III; Wechsler, 1998). Across both age groups, the effects of interference (control, visual, or spatial), and recall type (forward and backward), were investigated. There was a clear effect of age group, with younger adults demonstrating a larger spatial working memory capacity than the older adults overall. There was also a specific effect of interference, with the spatial interference task (spatial tapping) reliably reducing performance relative to both the control and visual interference (dynamic visual noise) conditions in both age groups and both recall types. This suggests that younger and older adults have similar dependence upon active spatial rehearsal, and that both forward and backward recall require this processing capacity. Linear regression analyses were then carried out within each age group, to assess the predictors of performance in each recall format (forward and backward). Specifically the backward recall task was significantly predicted by age, within both the younger and older adult groups. This finding supports previous literature showing lifespan linear declines in spatial-sequential working memory, and in working memory tasks from other domains, but contrasts with previous evidence that backward spatial span is no more sensitive to aging than forward span. The study suggests that backward spatial span is indeed more processing-intensive than forward span, even when both tasks include a retention period, and that age predicts
District heating in sequential energy supply
Persson, Urban; Werner, Sven
2012-01-01
Highlights: ► European excess heat recovery and utilisation by district heat distribution. ► Heat recovery in district heating systems – a structural energy efficiency measure. ► Introduction of new theoretical concepts to express excess heat recovery. ► Fourfold potential for excess heat utilisation in EU27 compared to current levels. ► Large scale excess heat recovery – a collaborative challenge for future Europe. -- Abstract: Increased recovery of excess heat from thermal power generation and industrial processes has great potential to reduce primary energy demands in EU27. In this study, current excess heat utilisation levels by means of district heat distribution are assessed and expressed by concepts such as recovery efficiency, heat recovery rate, and heat utilisation rate. For two chosen excess heat activities, current average EU27 heat recovery levels are compared to currently best Member State practices, whereby future potentials of European excess heat recovery and utilisation are estimated. The principle of sequential energy supply is elaborated to capture the conceptual idea of excess heat recovery in district heating systems as a structural and organisational energy efficiency measure. The general conditions discussed concerning expansion of heat recovery into district heating systems include infrastructure investments in district heating networks, collaboration agreements, maintained value chains, policy support, world market energy prices, allocation of synergy benefits, and local initiatives. The main conclusion from this study is that a future fourfold increase of current EU27 excess heat utilisation by means of district heat distribution to residential and service sectors is conceived as plausible if applying best Member State practice. This estimation is higher than the threefold increase with respect to direct feasible distribution costs estimated by the same authors in a previous study. Hence, no direct barriers appear with
Imitation of the sequential structure of actions by chimpanzees (Pan troglodytes).
Whiten, A
1998-09-01
Imitation was studied experimentally by allowing chimpanzees (Pan troglodytes) to observe alternative patterns of actions for opening a specially designed "artificial fruit." Like problematic foods primates deal with naturally, with the test fruit several defenses had to be removed to gain access to an edible core, but the sequential order and method of defense removal could be systematically varied. Each subject repeatedly observed 1 of 2 alternative techniques for removing each defense and 1 of 2 alternative sequential patterns of defense removal. Imitation of sequential organization emerged after repeated cycles of demonstration and attempts at opening the fruit. Imitation in chimpanzees may thus have some power to produce cultural convergence, counter to the supposition that individual learning processes corrupt copied actions. Imitation of sequential organization was accompanied by imitation of some aspects of the techniques that made up the sequence.
Dihydroazulene photoswitch operating in sequential tunneling regime
Broman, Søren Lindbæk; Lara-Avila, Samuel; Thisted, Christine Lindbjerg
2012-01-01
to electrodes so that the electron transport goes by sequential tunneling. To assure weak coupling, the DHA switching kernel is modified by incorporating p-MeSC6H4 end-groups. Molecules are prepared by Suzuki cross-couplings on suitable halogenated derivatives of DHA. The synthesis presents an expansion of our......, incorporating a p-MeSC6H4 anchoring group in one end, has been placed in a silver nanogap. Conductance measurements justify that transport through both DHA (high resistivity) and VHF (low resistivity) forms goes by sequential tunneling. The switching is fairly reversible and reenterable; after more than 20 ON...
Asynchronous Operators of Sequential Logic Venjunction & Sequention
Vasyukevich, Vadim
2011-01-01
This book is dedicated to new mathematical instruments intended for logical modeling of the memory of digital devices. The case in point is the logic-dynamical operation named venjunction and the venjunctive function, as well as sequention and the sequentional function. Venjunction and sequention operate within the framework of sequential logic. In the form of the corresponding equations, they organically fit the analytical expressions of Boolean algebra. Thus, a sort of symbiosis is formed, using elements of asynchronous sequential logic on the one hand and combinational logic on the other hand. So, asynchronous
Sequential and simultaneous multiple explanation
Robert Litchfield
2007-02-01
This paper reports two experiments comparing variants of multiple explanation applied in the early stages of a judgment task (a case involving employee theft) where participants are not given a menu of response options. Because prior research has focused on situations where response options are provided to judges, we identify relevant dependent variables that an intervention might affect when such options are not given. We use these variables to build a causal model of intervention that illustrates both the intended effects of multiple explanation and some potentially competing processes that it may trigger. Although multiple explanation clearly conveys some benefits (e.g., willingness to delay action to engage in information search; increased detail, quality, and confidence in alternative explanations) in the present experiments, we also found evidence that it may initiate or enhance processes that attenuate its advantages (e.g., feelings that one does not need more data if one has multiple good explanations).
The bacterial sequential Markov coalescent
De Maio, N; Wilson, DJ
2017-01-01
Bacteria can exchange and acquire new genetic material from other organisms directly and via the environment. This process, known as bacterial recombination, has a strong impact on the evolution of bacteria, for example leading to the spread of antibiotic resistance across clades and species, and to the avoidance of clonal interference. Recombination hinders phylogenetic and transmission inference because it creates patterns of substitutions that are not consistent with the hypothesis of a si...
Hilde Van Parijs
2014-01-01
Background. Breast conserving surgery followed by whole breast irradiation is widely accepted as the standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. Methods. For 10 patients, 4 treatment plans were deployed: 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. Results. PTV coverage was good with all techniques. Conformity was better with all SIB techniques compared to the sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to the sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. Conclusions. SIB showed less dose spilling within the breast and equal dose to the OAR compared to the sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine.
Van Parijs, Hilde; Reynders, Truus; Heuninckx, Karina; Verellen, Dirk; Storme, Guy; De Ridder, Mark
2014-01-01
Breast conserving surgery followed by whole breast irradiation is widely accepted as standard of care for early breast cancer. Addition of a boost dose to the initial tumor area further reduces local recurrences. We investigated the dosimetric benefits of a simultaneously integrated boost (SIB) compared to a sequential boost to hypofractionate the boost volume, while maintaining normofractionation on the breast. For 10 patients 4 treatment plans were deployed, 1 with a sequential photon boost, and 3 with different SIB techniques: on a conventional linear accelerator, helical TomoTherapy, and static TomoDirect. Dosimetric comparison was performed. PTV-coverage was good in all techniques. Conformity was better with all SIB techniques compared to sequential boost (P = 0.0001). There was less dose spilling to the ipsilateral breast outside the PTVboost (P = 0.04). The dose to the organs at risk (OAR) was not influenced by SIB compared to sequential boost. Helical TomoTherapy showed a higher mean dose to the contralateral breast, but less than 5 Gy for each patient. SIB showed less dose spilling within the breast and equal dose to OAR compared to sequential boost. Both helical TomoTherapy and the conventional technique delivered acceptable dosimetry. SIB seems a safe alternative and can be implemented in clinical routine.
Sequential Service Restoration for Unbalanced Distribution Systems and Microgrids
Chen, Bo; Chen, Chen; Wang, Jianhui; Butler-Purry, Karen L.
2017-01-01
The resilience and reliability of modern power systems are threatened by increasingly severe weather events and cyber-physical security events. An effective restoration methodology is desired to optimally integrate emerging smart grid technologies and pave the way for developing self-healing smart grids. In this paper, a sequential service restoration (SSR) framework is proposed to generate restoration solutions for distribution systems and microgrids in the event of large-scale power outages. The restoration solution contains a sequence of control actions that properly coordinate switches, distributed generators, and switchable loads to form multiple isolated microgrids. The SSR can be applied for three-phase unbalanced distribution systems and microgrids and can adapt to various operation conditions. Mathematical models are introduced for three-phase unbalanced power flow, voltage regulators, transformers, and loads. Furthermore, the SSR problem is formulated as a mixed-integer linear programming model, and its effectiveness is evaluated via the modified IEEE 123 node test feeder.
Dobolyi, David G; Dodson, Chad S
2013-12-01
Confidence judgments for eyewitness identifications play an integral role in determining guilt during legal proceedings. Past research has shown that confidence in positive identifications is strongly associated with accuracy. Using a standard lineup recognition paradigm, we investigated accuracy using signal detection and ROC analyses, along with the tendency to choose a face with both simultaneous and sequential lineups. We replicated past findings of reduced rates of choosing with sequential as compared to simultaneous lineups, but notably found an accuracy advantage in favor of simultaneous lineups. Moreover, our analysis of the confidence-accuracy relationship revealed two key findings. First, we observed a sequential mistaken identification overconfidence effect: despite an overall reduction in false alarms, confidence for false alarms that did occur was higher with sequential lineups than with simultaneous lineups, with no differences in confidence for correct identifications. This sequential mistaken identification overconfidence effect is an expected byproduct of the use of a more conservative identification criterion with sequential than with simultaneous lineups. Second, we found a steady drop in confidence for mistaken identifications (i.e., foil identifications and false alarms) from the first to the last face in sequential lineups, whereas confidence in and accuracy of correct identifications remained relatively stable. Overall, we observed that sequential lineups are both less accurate and produce higher confidence false identifications than do simultaneous lineups. Given the increasing prominence of sequential lineups in our legal system, our data argue for increased scrutiny and possibly a wholesale reevaluation of this lineup format. PsycINFO Database Record (c) 2013 APA, all rights reserved.
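The criterion-shift account described in this abstract can be illustrated with a toy signal-detection simulation (an illustrative sketch: the d′ value and the two criteria are assumed numbers, not estimates from the study):

```python
import numpy as np

# Signal-detection sketch of lineup decisions: memory strength for guilty
# suspects ~ N(d', 1) and for innocent suspects ~ N(0, 1); a witness
# "identifies" a face when its strength exceeds the decision criterion c.
rng = np.random.default_rng(0)
d_prime, n = 1.5, 100_000
guilty = rng.normal(d_prime, 1.0, n)
innocent = rng.normal(0.0, 1.0, n)

for label, c in [("simultaneous (liberal, c = 0.5)", 0.5),
                 ("sequential (conservative, c = 1.2)", 1.2)]:
    hits = float((guilty > c).mean())
    false_alarms = float((innocent > c).mean())
    print(f"{label}: hit rate {hits:.3f}, false-alarm rate {false_alarms:.3f}")
```

A more conservative criterion lowers both rates, and among the false alarms that do remain, the mean memory strength (a rough proxy for confidence) is higher, because only unusually strong innocent faces clear the higher bar. That is the "expected byproduct" logic the abstract invokes.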
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, but also aims to meet the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
Aoki, Y.; Kunori, S.; Nagano, K.; Toba, Y.; Yagi, K.
1981-01-01
Differential cross sections and vector analyzing powers for 14 N(p, p') and 14 N(p, d) reactions have been measured at E sub(p) = 21.0 MeV to elucidate the reaction mechanism and the effective interaction for the ΔS = ΔT = 1 transition in the 14 N(p, p') 14 N(2.31 MeV) reaction. The data are analyzed in terms of finite-range distorted wave Born approximation (DWBA) calculations which include direct, knock-on exchange and (p, d)(d, p') two-step processes. Shell model wave functions of Cohen and Kurath are used. The data for the first excited state are reasonably well explained by introducing the two-step process, which accounts for half of the experimental intensity. Moreover, the vector analyzing power can hardly be explained without introducing this two-step process. The vector analyzing power of protons leading to the second excited state in 14 N is better explained by introducing a macroscopic calculation. The data for the 14 N(p, d) 13 N(gs) reaction are well explained by a suitable choice of deuteron optical potential. The knock-on exchange contribution is relatively small. The importance of the two-step process for the ΔS = ΔT = 1 transition is discussed up to 40 MeV. (author)
Zuhr, R.A.
1995-11-01
The linear and nonlinear optical properties of nanometer dimension metal colloids embedded in a dielectric depend explicitly on the electronic structure of the metal nanoclusters. The ability to control the electronic structure of the nanoclusters may make it possible to tailor the optical properties for enhanced performance. By sequential implantation of different metal ion species, multi-component nanoclusters can be formed with significantly different optical properties than single element metal nanoclusters. The authors report the formation of multi-component Sb/Ag nanoclusters in silica by sequential implantation of Sb and Ag. Samples were implanted with relative ratios of Sb to Ag of 1:1 and 3:1. A second set of samples was made by single element implantations of Ag and Sb at the same energies and doses used to make the sequentially implanted samples. All samples were characterized using RBS and both linear and nonlinear optical measurements. The presence of both ions significantly modifies the optical properties of the composites compared to the single element nanocluster glass composites. In the sequentially implanted samples the optical density is lower, and the strong surface plasmon resonance absorption observed in the Ag implanted samples is not present. At the same time the nonlinear response of these samples is larger than for the samples implanted with Sb alone, suggesting that the addition of Ag can increase the nonlinear response of the Sb particles formed. The results are consistent with the formation of multi-component Sb/Ag colloids.
Du, Yigang
.3% relative to the measurement from a 1 inch diameter transducer. A preliminary study for harmonic imaging using synthetic aperture sequential beamforming (SASB) has been demonstrated. A wire phantom underwater measurement is made by an experimental synthetic aperture real-time ultrasound scanner (SARUS) with a linear array transducer. The second harmonic imaging is obtained by a pulse inversion technique. The received data is beamformed by the SASB using a Beamformation Toolbox. In the measurements the lateral resolution at -6 dB is improved by 66% compared to the conventional imaging algorithm. There is also a 35% improvement for the lateral resolution at -6 dB compared with the sole harmonic imaging and a 46% improvement compared with merely using the SASB.
Interpretability degrees of finitely axiomatized sequential theories
Visser, Albert
In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory-like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB-have suprema. This partially answers a question posed
Interpretability Degrees of Finitely Axiomatized Sequential Theories
Visser, Albert
2012-01-01
In this paper we show that the degrees of interpretability of finitely axiomatized extensions-in-the-same-language of a finitely axiomatized sequential theory —like Elementary Arithmetic EA, IΣ1, or the Gödel-Bernays theory of sets and classes GB— have suprema. This partially answers a question
S.M.P. SEQUENTIAL MATHEMATICS PROGRAM.
CICIARELLI, V; LEONARD, JOSEPH
A sequential mathematics program beginning with the basic fundamentals at the fourth-grade level is presented. Included are an understanding of our number system and the basic operations of working with whole numbers: addition, subtraction, multiplication, and division. Common fractions are taught in the fifth, sixth, and seventh grades. …
Sequential and Simultaneous Logit: A Nested Model.
van Ophem, J.C.M.; Schram, A.J.H.C.
1997-01-01
A nested model is presented which has both the sequential and the multinomial logit model as special cases. This model provides a simple test to investigate the validity of these specifications. Some theoretical properties of the model are discussed. In the analysis a distribution function is
Sequential models for coarsening and missingness
Gill, R.D.; Robins, J.M.
1997-01-01
In a companion paper we described what intuitively would seem to be the most general possible way to generate Coarsening at Random (CAR) mechanisms: a sequential procedure called randomized monotone coarsening. Counterexamples showed that CAR mechanisms exist which cannot be represented in this way. Here we
Sequential motor skill: cognition, perception and action
Ruitenberg, M.F.L.
2013-01-01
Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role
Sequential decoders for large MIMO systems
Ali, Konpal S.; Abediseid, Walid; Alouini, Mohamed-Slim
2014-01-01
This work studies the sequential decoder using the Fano algorithm for large MIMO systems. A parameter called the bias is varied to attain different performance-complexity trade-offs. Low values of the bias result in excellent performance but at the expense of high complexity.
A framework for sequential multiblock component methods
Smilde, A.K.; Westerhuis, J.A.; Jong, S.de
2003-01-01
Multiblock or multiset methods are starting to be used in chemistry and biology to study complex data sets. In chemometrics, sequential multiblock methods are popular; that is, methods that calculate one component at a time and use deflation for finding the next component. In this paper a framework
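The one-component-at-a-time-with-deflation idea described above can be sketched with a NIPALS-style iteration for a single data block (an illustrative stand-in, not the paper's multiblock framework; the function name and parameters are ours):

```python
import numpy as np

def nipals_component(X, n_iter=200):
    """One-component NIPALS iteration: returns scores t and unit-norm loadings p."""
    t = X[:, 0].copy()                 # initialize scores with the first column
    for _ in range(n_iter):
        p = X.T @ t / (t @ t)          # regress the columns of X on the scores
        p /= np.linalg.norm(p)         # normalize the loadings
        t = X @ p                      # update the scores
    return t, p

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 5))
t1, p1 = nipals_component(X)
X1 = X - np.outer(t1, p1)              # deflation: subtract the found component
t2, p2 = nipals_component(X1)          # the next component comes from the deflated block
```

After deflation the residual block carries no information along `p1` (`X1 @ p1` is zero), which is exactly why "calculate one component, deflate, repeat" yields a sequence of components.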
STABILIZED SEQUENTIAL QUADRATIC PROGRAMMING: A SURVEY
Damián Fernández
2014-12-01
We review the motivation for, the current state of the art in convergence results for, and some open questions concerning the stabilized version of the sequential quadratic programming algorithm for constrained optimization. We also discuss the tools required for its local convergence analysis, globalization challenges, and extensions of the method to more general variational problems.
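For orientation, one common statement of the stabilized subproblem for equality constraints h(x) = 0, with stabilization parameter σ_k > 0, is the min-max quadratic model (a sketch from the general literature; the survey's own notation may differ):

```latex
\min_{d}\;\max_{\lambda}\quad
  \nabla f(x_k)^{\top} d
  + \tfrac{1}{2}\, d^{\top} \nabla^2_{xx} L(x_k,\lambda_k)\, d
  + \lambda^{\top}\!\bigl( h(x_k) + J_h(x_k)\, d \bigr)
  - \tfrac{\sigma_k}{2}\, \lVert \lambda - \lambda_k \rVert^2
```

Here L is the Lagrangian and J_h the Jacobian of h. As σ_k → 0 the model reduces to the ordinary SQP subproblem; keeping σ_k > 0 regularizes the multiplier estimate, which is what gives the method its robustness near degenerate (non-unique-multiplier) solutions.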
Truly costly sequential search and oligopolistic pricing
Janssen, Maarten C W; Moraga-González, José Luis; Wildenbeest, Matthijs R.
We modify the paper of Stahl (1989) [Stahl, D.O., 1989. Oligopolistic pricing with sequential consumer search. American Economic Review 79, 700-12] by relaxing the assumption that consumers obtain the first price quotation for free. When all price quotations are costly to obtain, the unique
Zips : mining compressing sequential patterns in streams
Hoang, T.L.; Calders, T.G.K.; Yang, J.; Mörchen, F.; Fradkin, D.; Chau, D.H.; Vreeken, J.; Leeuwen, van M.; Faloutsos, C.
2013-01-01
We propose a streaming algorithm, based on the minimal description length (MDL) principle, for extracting non-redundant sequential patterns. For static databases, the MDL-based approach that selects patterns based on their capacity to compress data rather than their frequency, was shown to be
How to Read the Tractatus Sequentially
Tim Kraft
2016-11-01
One of the unconventional features of Wittgenstein’s Tractatus Logico-Philosophicus is its use of an elaborate and detailed numbering system. Recently, Bazzocchi, Hacker and Kuusela have argued that the numbering system means that the Tractatus must be read and interpreted not as a sequentially ordered book, but as a text with a two-dimensional, tree-like structure. Apart from being able to explain how the Tractatus was composed, the tree reading allegedly solves exegetical issues both on the local level (e.g. how 4.02 fits into the series of remarks surrounding it) and on the global level (e.g. the relation between ontology and picture theory, solipsism and the eye analogy, resolute and irresolute readings). This paper defends the sequential reading against the tree reading. After presenting the challenges generated by the numbering system and the two accounts as attempts to solve them, it is argued that Wittgenstein’s own explanation of the numbering system, anaphoric references within the Tractatus and the exegetical issues mentioned above do not favour the tree reading, but a version of the sequential reading. This reading maintains that the remarks of the Tractatus form a sequential chain: the role of the numbers is to indicate how remarks on different levels are interconnected to form a concise, surveyable and unified whole.
Adult Word Recognition and Visual Sequential Memory
Holmes, V. M.
2012-01-01
Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…
Terminating Sequential Delphi Survey Data Collection
Kalaian, Sema A.; Kasim, Rafa M.
2012-01-01
The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through a well-designed and systematic multiple sequential rounds of survey administrations. Each of the multiple rounds of the Delphi survey…
Ivonne Burguet Lago
2018-05-01
The paper describes a proposal of professional pedagogical performance tests to assess teachers’ role in the process of developing the skill of working with algorithms in Linear Algebra. It aims at devising a testing tool to assess teachers’ performance in the skill-developing process. This tool is a finding of the Cuban theory of Advanced Education, systematically used in recent years. The findings include the test design and the illustration of its use in a sample of 22 Linear Algebra teachers during the first term of the 2017-2018 academic year in the Informatics Sciences Engineering major.
Synthetic Aperture Sequential Beamforming implemented on multi-core platforms
Kjeldsen, Thomas; Lassen, Lee; Hemmsen, Martin Christian
2014-01-01
This paper compares several computational approaches to Synthetic Aperture Sequential Beamforming (SASB) targeting consumer-level parallel processors such as multi-core CPUs and GPUs. The proposed implementations demonstrate that ultrasound imaging using SASB can be executed in real-time (… per second) on an Intel Core i7 2600 CPU with an AMD HD7850 and a NVIDIA GTX680 GPU. The fastest CPU and GPU implementations use 14% and 1.3% of the real-time budget of 62 ms/frame, respectively. The maximum achieved processing rate is 1265 frames/s.
MUF residuals tested by a sequential test with power one
Sellinschegg, D.; Bicking, U.
1983-01-01
Near-real-time material accountancy is an ongoing safeguards development to extend the current capability of IAEA safeguards. The evaluation of the observed 'Material Unaccounted For' (MUF) time series is an important part of a near-real-time material accountancy regime. The maximum capability of a sequential data evaluation procedure is demonstrated by applying this procedure to the material balance area of the chemical separation process of a reference reprocessing facility with a throughput of 1000 tonnes of heavy metal per year, as an example. It is shown that, compared to a conventional material accountancy approach, both the detection time and the detection probability are significantly improved. (author)
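A power-one sequential test on a cumulative MUF sum can be sketched as follows (the Robbins-style boundary and the constant `a2` are illustrative assumptions, not the paper's calibrated procedure):

```python
import math

def power_one_test(muf_series, sigma=1.0, a2=4.0):
    """Flag a loss when the cumulative MUF sum crosses a Robbins-style
    power-one boundary sigma*sqrt((n+1)*(a2 + ln(n+1))). The boundary form
    and a2 are illustrative choices, not the paper's calibrated values."""
    s = 0.0
    for n, muf in enumerate(muf_series, start=1):
        s += muf
        bound = sigma * math.sqrt((n + 1) * (a2 + math.log(n + 1)))
        if s >= bound:
            return n          # alarm raised at balance period n
    return None               # boundary never crossed: no alarm

print(power_one_test([0.0] * 100))   # no loss: no alarm
print(power_one_test([1.0] * 100))   # persistent loss of 1 sigma per period: early alarm
```

The point of the power-one construction is that under a persistent loss the cumulative sum grows linearly while the boundary grows only like sqrt(n log n), so detection is eventually certain, while under no loss the crossing probability stays bounded.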
The target-to-foils shift in simultaneous and sequential lineups.
Clark, Steven E; Davey, Sherrie L
2005-04-01
A theoretical cornerstone in eyewitness identification research is the proposition that witnesses, in making decisions from standard simultaneous lineups, make relative judgments. The present research considers two sources of support for this proposal. An experiment by G. L. Wells (1993) showed that if the target is removed from a lineup, witnesses shift their responses to pick foils, rather than rejecting the lineups, a result we will term a target-to-foils shift. Additional empirical support is provided by results from sequential lineups which typically show higher accuracy than simultaneous lineups, presumably because of a decrease in the use of relative judgments in making identification decisions. The combination of these two lines of research suggests that the target-to-foils shift should be reduced in sequential lineups relative to simultaneous lineups. Results of two experiments showed an overall advantage for sequential lineups, but also showed a target-to-foils shift equal in size for simultaneous and sequential lineups. Additional analyses indicated that the target-to-foils shift in sequential lineups was moderated in part by an order effect and was produced with (Experiment 2) or without (Experiment 1) a shift in decision criterion. This complex pattern of results suggests that more work is needed to understand the processes which underlie decisions in simultaneous and sequential lineups.
Involving young people in decision making about sequential cochlear implantation.
Ion, Rebecca; Cropper, Jenny; Walters, Hazel
2013-11-01
The National Institute for Health and Clinical Excellence guidelines recommended young people who currently have one cochlear implant be offered assessment for a second, sequential implant, due to the reported improvements in sound localization and speech perception in noise. The possibility and benefits of group information and counselling assessments were considered. Previous research has shown advantages of group sessions involving young people and their families and such groups which also allow young people opportunity to discuss their concerns separately to their parents/guardians are found to be 'hugely important'. Such research highlights the importance of involving children in decision-making processes. Families considering a sequential cochlear implant were invited to a group information/counselling session, which included time for parents and children to meet separately. Fourteen groups were held with approximately four to five families in each session, totalling 62 patients. The sessions were facilitated by the multi-disciplinary team, with a particular psychological focus in the young people's session. Feedback from families has demonstrated positive support for this format. Questionnaire feedback, to which nine families responded, indicated that seven preferred the group session to an individual session and all approved of separate groups for the child and parents/guardians. Overall the group format and psychological focus were well received in this typically surgical setting and emphasized the importance of involving the young person in the decision-making process. This positive feedback also opens up the opportunity to use a group format in other assessment processes.
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time, and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
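The sequential likelihood ratio step can be sketched with Wald's SPRT applied to photon interarrival times under a simple exponential model (a simplified stand-in for the patent's full EMS processing; the rates and error levels are illustrative):

```python
import math

def sprt_exponential(times, rate0=1.0, rate1=3.0, alpha=0.01, beta=0.01):
    """Wald SPRT on photon interarrival times: H0 background (rate0) vs
    H1 target radionuclide (rate1). Rates and error levels are illustrative."""
    upper = math.log((1 - beta) / alpha)    # cross above: accept H1 (target)
    lower = math.log(beta / (1 - alpha))    # cross below: accept H0 (background)
    llr = 0.0
    for n, t in enumerate(times, start=1):
        # log-likelihood ratio contribution of one exponential interarrival time
        llr += math.log(rate1 / rate0) - (rate1 - rate0) * t
        if llr >= upper:
            return "target", n
        if llr <= lower:
            return "background", n
    return "undecided", len(times)

print(sprt_exponential([0.2] * 50))   # short gaps: decides "target"
print(sprt_exponential([1.0] * 50))   # long gaps: decides "background"
```

As in the patent's scheme, the test stops at the first photon event whose accumulated evidence crosses either threshold, so the number of events needed adapts to how decisive the data are.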
Robert B. Gramacy
2007-06-01
The tgp package for R is a tool for fully Bayesian nonstationary, semiparametric nonlinear regression and design by treed Gaussian processes with jumps to the limiting linear model. Special cases also implemented include Bayesian linear models, linear CART, and stationary separable and isotropic Gaussian processes. In addition to inference and posterior prediction, the package supports the (sequential) design of experiments under these models paired with several objective criteria. 1-d and 2-d plotting, with higher-dimension projection and slice capabilities, and tree drawing functions (requiring the maptree and combinat packages) are also provided for visualization of tgp objects.
A Bayesian Optimal Design for Sequential Accelerated Degradation Testing
Xiaoyang Li
2017-07-01
When optimizing an accelerated degradation testing (ADT) plan, the initial values of unknown model parameters must be pre-specified. However, it is usually difficult to obtain the exact values, since many uncertainties are embedded in these parameters. Bayesian ADT optimal design was presented to address this problem by using prior distributions to capture these uncertainties. Nevertheless, when the difference between a prior distribution and the actual situation is large, the existing Bayesian optimal design might cause over-testing or under-testing issues: for example, the implemented ADT following the optimal plan consumes excessive testing resources, or too few accelerated degradation data are obtained during the ADT. To overcome these obstacles, a Bayesian sequential step-down-stress ADT design is proposed in this article. During the sequential ADT, the test under the highest stress level is first conducted based on the initial prior information to quickly generate degradation data. Then, the data collected under higher stress levels are employed to construct the prior distributions for the test design under lower stress levels by using Bayesian inference. In the process of optimization, the inverse Gaussian (IG) process is assumed to describe the degradation paths, and Bayesian D-optimality is selected as the optimization objective. A case study on an electrical connector’s ADT plan is provided to illustrate the application of the proposed Bayesian sequential ADT design method. Compared with the results from a typical static Bayesian ADT plan, the proposed design could guarantee more stable and precise estimations of different reliability measures.
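An IG degradation path of the kind assumed in the abstract can be simulated with NumPy's Wald (inverse Gaussian) sampler (the parametrization and parameter values here are illustrative assumptions, not the article's model):

```python
import numpy as np

# Sketch of an inverse Gaussian (IG) degradation process: over a time step of
# length dt the increment is IG-distributed with mean mu*dt and shape lam*dt**2
# (one common parametrization; mu, lam, dt are made-up illustrative values).
rng = np.random.default_rng(42)
mu, lam, dt, n_steps = 2.0, 10.0, 0.1, 50

increments = rng.wald(mu * dt, lam * dt**2, size=n_steps)  # IG increments, all positive
path = np.cumsum(increments)        # monotone degradation trajectory
time = dt * np.arange(1, n_steps + 1)
print(f"degradation after t={time[-1]:.1f}: {path[-1]:.2f} (process mean {mu * time[-1]:.1f})")
```

Because IG increments are strictly positive, the simulated path is monotone, which is why the IG process is a popular model for irreversible degradation; a failure time can then be read off as the first crossing of a threshold.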
LINEAR2007, Linear-Linear Interpolation of ENDF Format Cross-Sections
2007-01-01
1 - Description of program or function: LINEAR converts evaluated cross sections in the ENDF/B format into a tabular form that is subject to linear-linear interpolation in energy and cross section. The code also thins tables of cross sections already in that form. Codes used subsequently thus need to consider only linear-linear data. IAEA1311/15: This version includes the updates up to January 30, 2007. Changes in the ENDF/B-VII format and procedures, as well as the evaluations themselves, make it impossible for versions of the ENDF/B pre-processing codes earlier than PREPRO 2007 (2007 version) to accurately process current ENDF/B-VII evaluations. The present code can handle all existing ENDF/B-VI evaluations through release 8, which will be the last release of ENDF/B-VI. Modifications from previous versions: - Linear VERS. 2007-1 (JAN. 2007): checked against all of ENDF/B-VII; increased page size from 60,000 to 600,000 points. 2 - Method of solution: Each section of data is considered separately. Each section of File 3, 23, and 27 data consists of a table of cross section versus energy with any of five interpolation laws. LINEAR will replace each section with a new table of energy versus cross section data in which the interpolation law is always linear in energy and cross section. The histogram (constant cross section between two energies) interpolation law is converted to linear-linear by substituting two points for each initial point. Linear-linear data are not altered. For the log-linear, linear-log and log-log laws, the cross section data are converted to linear form by an interval-halving algorithm: each interval is divided in half until the value at the middle of the interval can be approximated by linear-linear interpolation to within a given accuracy. The LINEAR program uses a multipoint fractional error thinning algorithm to minimize the size of each cross section table.
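The interval-halving conversion described under "Method of solution" can be sketched as follows (a simplified illustration of the idea, not the LINEAR code itself; function name and tolerance handling are ours):

```python
def linearize(f, x0, x1, tol=1e-4):
    """Tabulate f so that linear-linear interpolation of the returned points
    reproduces f(x) to within a relative tolerance, by interval halving."""
    y0, y1 = f(x0), f(x1)
    pts = [(x0, y0)]

    def halve(xa, ya, xb, yb):
        xm = 0.5 * (xa + xb)
        ym = f(xm)
        # accept the interval if the linear-linear estimate at its midpoint
        # agrees with f to within the requested relative accuracy
        if abs(0.5 * (ya + yb) - ym) <= tol * abs(ym):
            pts.append((xb, yb))
        else:
            halve(xa, ya, xm, ym)     # refine the left half first...
            halve(xm, ym, xb, yb)     # ...then the right, keeping x ascending

    halve(x0, y0, x1, y1)
    return pts

# example: a power law, i.e. a cross section that would be a straight line
# under log-log interpolation but needs many points under linear-linear
table = linearize(lambda x: x ** -2, 1.0, 10.0)
print(len(table), "points")
```

Steeply varying laws generate many points near the low-energy end and few where the function is nearly linear, which is why LINEAR pairs the halving step with a thinning pass to keep tables small.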
A Cross-Cultural Validation of the Sequential-Simultaneous Theory of Intelligence in Children.
Moon, Soo-Back; McLean, James E.; Kaufman, Alan S.
2003-01-01
The Kaufman Assessment Battery for Children - Korean (K-ABC-K) was developed to assess the intelligence and achievement of preschool and school-aged Korean children. This study examined the validity of the Sequential Processing, Simultaneous Processing and Achievement scales of the K-ABC-K. The factor analyses provided strong support for the…