WorldWideScience

Sample records for maximize system throughput

  1. Throughput maximization of parcel sorter systems by scheduling inbound containers

    NARCIS (Netherlands)

    Haneyah, S.W.A.; Schutten, Johannes M.J.; Fikse, K.; Clausen, Uwe; ten Hompel, Michael; Meier, J. Fabian

    2013-01-01

    This paper addresses the inbound container scheduling problem for automated sorter systems in express parcel sorting. The purpose is to analyze which container scheduling approaches maximize the throughput of sorter systems. We build on existing literature, particularly on the dynamic load balancing

  2. Joint Throughput Maximization and Fair Uplink Transmission Scheduling in CDMA Systems

    Directory of Open Access Journals (Sweden)

    Symeon Papavassiliou

    2009-01-01

    We study the fundamental problem of optimal transmission scheduling in a code-division multiple-access wireless system in order to maximize the uplink system throughput, while satisfying the users' quality-of-service (QoS) requirements and maintaining fairness among them. The corresponding problem is expressed as a weighted throughput maximization problem, under certain power and QoS constraints, where the weights are the control parameters reflecting the fairness constraints. With the introduction of the power index capacity, it is shown that this optimization problem can be converted into a binary knapsack problem, where all the corresponding constraints are replaced by the power index capacities at a certain system power index. A two-step approach is followed to obtain the optimal solution. First, a simple method is proposed to find the optimal set of users to receive service for a given fixed target system load, and then the optimal solution is obtained via a global search within a certain range. Furthermore, a stochastic approximation method is presented to effectively identify the required control parameters. The performance evaluation reveals the advantages of the proposed policy over existing ones and confirms that it achieves very high throughput while maintaining fairness among the users, under different channel conditions and requirements.
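The knapsack reduction described in this abstract can be illustrated with a standard 0/1 knapsack selector; the per-user values (weighted throughputs) and integerized power-index costs below are hypothetical, not taken from the paper:

```python
def knapsack_select(values, costs, capacity):
    """0/1 knapsack via dynamic programming.

    values[i]: weighted throughput of serving user i
    costs[i]:  user i's power index demand (integerized)
    capacity:  total system power index budget
    Returns (best_value, chosen_user_indices).
    """
    n = len(values)
    dp = [0.0] * (capacity + 1)           # dp[c] = best value within budget c
    keep = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        for c in range(capacity, costs[i] - 1, -1):
            cand = dp[c - costs[i]] + values[i]
            if cand > dp[c]:
                dp[c] = cand
                keep[i][c] = True
    # Backtrack to recover the selected user set
    chosen, c = [], capacity
    for i in range(n - 1, -1, -1):
        if keep[i][c]:
            chosen.append(i)
            c -= costs[i]
    return dp[capacity], sorted(chosen)
```

For instance, `knapsack_select([6, 10, 12], [1, 2, 3], 5)` selects users 1 and 2 for a total value of 22.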

  3. Scheduling Algorithms for Maximizing Throughput with Zero-Forcing Beamforming in a MIMO Wireless System

    Science.gov (United States)

    Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi

    Dirty paper coding (DPC) is a strategy that achieves the capacity region of multiple-input multiple-output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum-rate capacity as DPC with an exhaustive search over the entire user set. Suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance throughput and fairness among the users, respectively. However, they are not throughput optimal, and fairness and throughput decrease when user queue lengths differ because of differing channel quality across users. Therefore, we propose two different scheduling algorithms: a throughput-optimal scheduling algorithm (ZFBF-TO) and a reduced-complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, must select users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms produce the rate allocation and power allocation for the selected users based on a modified water-filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
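The mechanism the ZFBF-based schedulers share, nulling inter-user interference by pseudo-inverting the channel matrix of the selected users, can be sketched in a few lines (random channels and dimensions are illustrative):

```python
import numpy as np

def zf_beamformers(H):
    """Zero-forcing beamformers for the selected users.

    H: (K, M) channel matrix, row k = channel of user k,
       with K selected users <= M transmit antennas.
    Returns W (M, K) with unit-norm columns such that user k
    receives no interference from stream j != k.
    """
    W = np.linalg.pinv(H)           # H @ W == I before normalization
    W /= np.linalg.norm(W, axis=0)  # unit power per stream
    return W

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4))     # 3 selected users, 4 antennas
W = zf_beamformers(H)
G = H @ W                           # effective gain matrix
# off-diagonal (interference) entries vanish under zero forcing
assert np.allclose(G - np.diag(np.diag(G)), 0.0, atol=1e-10)
```

The schedulers in the abstract then choose *which* rows of H to serve (using queue lengths and orthogonality) before computing such beamformers.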

  4. Aspects of multiuser MIMO for cell throughput maximization

    DEFF Research Database (Denmark)

    Bauch, Gerhard; Tejera, Pedro; Guthy, Christian

    2007-01-01

    We consider a multiuser MIMO downlink scenario where the resources in time, frequency and space are allocated such that the total cell throughput is maximized. This is achieved by exploiting multiuser diversity, i.e. the physical resources are allocated to the user with the highest SNR. We assume...

  5. On Throughput Maximization in Constant Travel-Time Robotic Cells

    OpenAIRE

    Milind Dawande; Chelliah Sriskandarajah; Suresh Sethi

    2002-01-01

    We consider the problem of scheduling operations in bufferless robotic cells that produce identical parts. The objective is to find a cyclic sequence of robot moves that minimizes the long-run average time to produce a part or, equivalently, maximizes the throughput rate. The robot can be moved in simple cycles that produce one unit or, in more complicated cycles, that produce multiple units. Because one-unit cycles are the easiest to understand, implement, and control, they are widely used i...

  6. On Maximizing the Throughput of Packet Transmission under Energy Constraints.

    Science.gov (United States)

    Wu, Weiwei; Dai, Guangli; Li, Yan; Shan, Feng

    2018-06-23

    More and more Internet of Things (IoT) wireless devices have been providing ubiquitous services in recent years. Since most of these devices are powered by batteries, a fundamental trade-off to be addressed is between the energy depleted and the data throughput achieved in wireless data transmission. By exploiting the rate-adaptive capacities of wireless devices, most existing works on energy-efficient data transmission try to design rate-adaptive transmission policies to maximize the amount of transmitted data bits under the energy constraints of devices. Such solutions, however, cannot apply to scenarios where data packets have individual deadlines and only integrally transmitted data packets contribute. Thus, this paper introduces a notion of weighted throughput, which measures the total value of the data packets that are successfully and integrally transmitted before their deadlines. By designing efficient rate-adaptive transmission policies, this paper aims to make the best use of the energy and maximize the weighted throughput. More challenging, but of practical significance, we consider the fading effect of wireless channels in both offline and online scenarios. In the offline scenario, we develop an algorithm that computes the optimal solution in pseudo-polynomial time, which is the best possible since the problem is NP-hard. In the online scenario, we propose an efficient heuristic algorithm based on optimality properties derived for the offline solution. Simulation results validate the efficiency of the proposed algorithm.
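A minimal sketch of the "integral packets with deadlines" structure, under strong simplifying assumptions (single link, fixed rate so energy is proportional to airtime, no fading), is the classic deadline-constrained selection DP; the packet values, airtimes and deadlines below are hypothetical:

```python
def max_value_by_deadlines(packets):
    """packets: list of (value, airtime, deadline) in slot units.

    With a fixed transmission rate, energy spent is proportional to
    airtime, so the budget is modeled as channel-busy time. Sorting by
    deadline admits a knapsack-style DP over busy time:
        dp[t] = best value with the channel busy for t slots,
    updated only for t up to each packet's deadline.
    """
    packets = sorted(packets, key=lambda p: p[2])
    horizon = max(d for _, _, d in packets)
    dp = [0] * (horizon + 1)
    for value, airtime, deadline in packets:
        for t in range(min(deadline, horizon), airtime - 1, -1):
            dp[t] = max(dp[t], dp[t - airtime] + value)
    return max(dp)
```

For example, with packets `[(10, 2, 2), (6, 1, 2), (7, 1, 3)]` the DP returns 17: the value-10 packet fills slots 1-2 and the value-7 packet meets its deadline in slot 3. The paper's offline algorithm additionally handles per-packet energy under fading, which this sketch omits.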

  7. Maximizing gain in high-throughput screening using conformal prediction.

    Science.gov (United States)

    Svensson, Fredrik; Afzal, Avid M; Norinder, Ulf; Bender, Andreas

    2018-02-21

    Iterative screening has emerged as a promising approach for increasing the efficiency of screening campaigns compared to traditional high-throughput approaches. By learning from a subset of the compound library, predictive models can infer which compounds to screen next, resulting in more efficient screening. One way to evaluate screening is to consider the cost of screening compared to the gain associated with finding an active compound. In this work, we introduce a conformal predictor coupled with a gain-cost function with the aim of maximizing gain in iterative screening. Using this setup, we show that by evaluating the predictions on the training data, very accurate predictions can be made about which settings will produce the highest gain on the test data. We evaluate the approach on 12 bioactivity datasets from PubChem, training the models on 20% of the data. Depending on the settings of the gain-cost function, the settings generating the maximum gain were accurately identified in 8-10 out of the 12 datasets. Broadly, our approach can predict which strategy generates the highest gain based on the results of the cost-gain evaluation: screen the compounds predicted to be active, screen all the remaining data, or do not screen any additional compounds. When the algorithm indicates that the predicted active compounds should be screened, our approach also indicates what confidence level to apply in order to maximize gain. Hence, our approach facilitates decision-making and allocates resources where they deliver the most value by indicating in advance the likely outcome of a screening campaign.
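The three-way decision described at the end of this abstract can be sketched as a simple expected-gain comparison; the hit rates, gains and costs below are hypothetical placeholders, not the paper's gain-cost function:

```python
def best_screening_strategy(n_pred_active, hit_rate_pred, n_remaining,
                            hit_rate_all, gain_per_hit, cost_per_compound):
    """Compare the three strategies from the abstract on expected gain:
    screen only predicted actives, screen everything remaining, or stop.
    All rates and costs are illustrative inputs, not values from the paper.
    """
    strategies = {
        "screen_predicted": n_pred_active * (hit_rate_pred * gain_per_hit
                                             - cost_per_compound),
        "screen_all": n_remaining * (hit_rate_all * gain_per_hit
                                     - cost_per_compound),
        "screen_none": 0.0,
    }
    return max(strategies, key=strategies.get), strategies
```

With, say, 1000 predicted actives at a 5% hit rate versus 100,000 remaining compounds at 0.2%, a gain of 100 per hit and a unit cost of 1 per compound, screening only the predicted actives wins (expected gain 4000 versus a loss for screening everything).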

  8. Hazardous waste incinerators under waste uncertainty: balancing and throughput maximization via heat recuperation.

    Science.gov (United States)

    Tsiliyannis, Christos Aristeides

    2013-09-01

    Hazardous waste incinerators (HWIs) differ substantially from thermal power facilities: instead of maximizing energy production with the minimum amount of fuel, they aim at maximizing throughput. Variations in the quantity or composition of received waste loads may significantly diminish HWI throughput (the decisive profit factor) from its nominal design value. A novel formulation of the combustion balance is presented, based on linear operators, which isolates the waste-feed vector from the invariant combustion stoichiometry kernel. Explicit expressions for the throughput are obtained, in terms of incinerator temperature, flue-gas heat recuperation ratio and design parameters, for an arbitrary number of wastes, based on fundamental principles (mass and enthalpy balances). The impact of waste variations, of the recuperation ratio and of furnace temperature is explicitly determined. It is shown that in the presence of waste uncertainty, the throughput may be a decreasing or increasing function of incinerator temperature and recuperation ratio, depending on the sign of a dimensionless parameter related only to the uncertain wastes. This dimensionless parameter is proposed as a sharp a priori waste 'fingerprint', determining the necessary increase or decrease of the manipulated variables (recuperation ratio, excess air, auxiliary fuel feed rate, auxiliary air flow) in order to balance the HWI and maximize throughput under uncertainty in received wastes. A 10-step procedure is proposed for direct application subject to process capacity constraints. The results may be useful for efficient HWI operation and for preparing hazardous waste blends.

  9. Maximization Network Throughput Based on Improved Genetic Algorithm and Network Coding for Optical Multicast Networks

    Science.gov (United States)

    Wei, Chengying; Xiong, Cuilian; Liu, Huanlin

    2017-12-01

    Maximal multicast stream algorithms based on network coding (NC) can improve throughput in wavelength-division multiplexing (WDM) networks, yet the throughput they achieve falls far short of the theoretical maximum. Moreover, existing multicast stream algorithms do not determine the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is proposed to maximize optical multicast throughput by NC and to determine the multicast stream distribution through hybrid chromosome construction, for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes: the binary chromosomes represent the optical multicast routing, while the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decodable multicast streams. Simulation results show that the proposed method is far superior to typical NC-based maximal multicast stream algorithms in terms of network throughput in WDM networks.

  10. Throughput maximization for buffer-aided hybrid half-/full-duplex relaying with self-interference

    KAUST Repository

    Khafagy, Mohammad Galal

    2015-06-01

    In this work, we consider a two-hop cooperative setting where a source communicates with a destination through an intermediate relay node with a buffer. Unlike the existing body of work on buffer-aided half-duplex relaying, we consider a hybrid half-/full-duplex relaying scenario with loopback interference in the full-duplex mode. Depending on the channel outage and buffer states that are assumed available at the transmitters, the source and relay may either transmit simultaneously or revert to orthogonal transmission. Specifically, a joint source/relay scheduling and relaying mode selection mechanism is proposed to maximize the end-to-end throughput. The throughput maximization problem is converted to a linear program where the exact global optimal solution is efficiently obtained via standard convex/linear numerical optimization tools. Finally, the theoretical findings are corroborated with event-based simulations to provide the necessary performance validation.
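For intuition, the special case with no full-duplex mode (pure two-phase half-duplex relaying, fixed link rates, no buffer dynamics) admits a closed form rather than a full linear program; a sketch under those simplifying assumptions:

```python
def halfduplex_throughput(r_sr, r_rd):
    """Optimal time split for two-phase half-duplex relaying.

    A fraction t of time is spent on the source->relay hop (rate r_sr)
    and 1-t on relay->destination (rate r_rd). Equalizing the two flows
    maximizes min(t * r_sr, (1 - t) * r_rd), giving
        t* = r_rd / (r_sr + r_rd),
    i.e. end-to-end throughput r_sr * r_rd / (r_sr + r_rd).
    """
    t = r_rd / (r_sr + r_rd)
    return t * r_sr, t
```

With `r_sr = 4` and `r_rd = 2` bits/s/Hz this gives t* = 1/3 and throughput 4/3. The paper's hybrid half-/full-duplex problem generalizes this balancing act into a linear program over the time shares of all transmission modes, including simultaneous source/relay transmission with loopback interference.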

  11. Throughput Maximization for Cognitive Radio Networks Using Active Cooperation and Superposition Coding

    KAUST Repository

    Hamza, Doha R.

    2015-02-13

    We propose a three-message superposition coding scheme in a cognitive radio relay network exploiting active cooperation between primary and secondary users. The primary user is motivated to cooperate by the substantial benefits it can reap from this access scenario. Specifically, the time resource is split into three transmission phases: the first two phases are dedicated to primary communication, while the third phase is for the secondary's transmission. We formulate two throughput maximization problems for the secondary network subject to primary user rate constraints and per-node power constraints, with respect to the time durations of primary transmission and the transmit powers of the primary and secondary users. The first throughput maximization problem assumes a partial power constraint such that the secondary power dedicated to primary cooperation, i.e. for the first two communication phases, is fixed a priori. In the second throughput maximization problem, a total power constraint is assumed over the three phases of communication. The two problems are difficult to solve analytically when the relaying channel gains are strictly greater than each other and strictly greater than the direct-link channel gain. However, mathematically tractable lower-bound and upper-bound solutions can be attained for both problems. For both problems, using only the lower-bound solution, we demonstrate significant throughput gains for both the primary and the secondary users through this active cooperation scheme. We find that most of the throughput gains come from minimizing the second-phase transmission time, since the secondary nodes assist the primary communication during this phase. Finally, we demonstrate the superiority of our proposed scheme compared to a number of reference schemes that include best relay selection, dual-hop routing, and an interference channel model.

  12. Throughput Maximization for Cognitive Radio Networks Using Active Cooperation and Superposition Coding

    KAUST Repository

    Hamza, Doha R.; Park, Kihong; Alouini, Mohamed-Slim; Aissa, Sonia

    2015-01-01

    We propose a three-message superposition coding scheme in a cognitive radio relay network exploiting active cooperation between primary and secondary users. The primary user is motivated to cooperate by the substantial benefits it can reap from this access scenario. Specifically, the time resource is split into three transmission phases: the first two phases are dedicated to primary communication, while the third phase is for the secondary's transmission. We formulate two throughput maximization problems for the secondary network subject to primary user rate constraints and per-node power constraints, with respect to the time durations of primary transmission and the transmit powers of the primary and secondary users. The first throughput maximization problem assumes a partial power constraint such that the secondary power dedicated to primary cooperation, i.e. for the first two communication phases, is fixed a priori. In the second throughput maximization problem, a total power constraint is assumed over the three phases of communication. The two problems are difficult to solve analytically when the relaying channel gains are strictly greater than each other and strictly greater than the direct-link channel gain. However, mathematically tractable lower-bound and upper-bound solutions can be attained for both problems. For both problems, using only the lower-bound solution, we demonstrate significant throughput gains for both the primary and the secondary users through this active cooperation scheme. We find that most of the throughput gains come from minimizing the second-phase transmission time, since the secondary nodes assist the primary communication during this phase. Finally, we demonstrate the superiority of our proposed scheme compared to a number of reference schemes that include best relay selection, dual-hop routing, and an interference channel model.

  13. Generalized Encoding CRDSA: Maximizing Throughput in Enhanced Random Access Schemes for Satellite

    Directory of Open Access Journals (Sweden)

    Manlio Bacco

    2014-12-01

    This work starts from an analysis of the literature on random access protocols with contention resolution, such as Contention Resolution Diversity Slotted Aloha (CRDSA), and introduces a possible enhancement, named Generalized Encoding Contention Resolution Diversity Slotted Aloha (GE-CRDSA). GE-CRDSA aims at improving the aggregated throughput when the system load is below 50%, exploiting the opportunity to transmit an optimal combination of information and parity packets frame by frame. This paper shows the improvement in terms of throughput, achieved by performing traffic estimation and an adaptive choice of information and parity rates, when a satellite network undergoes a variable traffic load profile.
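The contention-resolution idea behind CRDSA, two replicas per packet plus iterative cancellation of decoded users, can be sketched with a small Monte-Carlo simulation; all parameters are illustrative:

```python
import random

def crdsa_throughput(n_users, n_slots, trials=200, seed=1):
    """Monte-Carlo sketch of CRDSA: each user sends 2 replicas of its
    packet in distinct random slots; any slot containing exactly one
    undecoded replica is decoded, and the decoded user's other replica
    is cancelled, iterating until no further progress."""
    rng = random.Random(seed)
    decoded_total = 0
    for _ in range(trials):
        slots = [set() for _ in range(n_slots)]
        for u in range(n_users):
            a, b = rng.sample(range(n_slots), 2)  # two replica slots
            slots[a].add(u)
            slots[b].add(u)
        decoded = set()
        progress = True
        while progress:                            # interference cancellation
            progress = False
            for s in slots:
                live = s - decoded
                if len(live) == 1:                 # clean replica: decode it
                    decoded |= live
                    progress = True
        decoded_total += len(decoded)
    return decoded_total / (trials * n_slots)      # packets per slot

# at an offered load of 0.4 packets/slot, nearly all packets are recovered
print(crdsa_throughput(40, 100))
```

GE-CRDSA, as described above, additionally replaces some replicas with parity packets chosen adaptively from traffic estimates, which this plain-CRDSA sketch does not model.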

  14. Utility Maximization in Nonconvex Wireless Systems

    CERN Document Server

    Brehmer, Johannes

    2012-01-01

    This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.

  15. Maximizing Resource Utilization in Video Streaming Systems

    Science.gov (United States)

    Alsmirat, Mohammad Abdullah

    2013-01-01

    Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wireless networks. Because of the resource-demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor in increasing the scalability and decreasing the cost of the system. Resources to…

  16. Optimizing the Energy and Throughput of a Water-Quality Monitoring System.

    Science.gov (United States)

    Olatinwo, Segun O; Joubert, Trudi-H

    2018-04-13

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy-harvesting-based WSN systems may experience an unstable energy supply, resulting in interrupted communication and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered, harvesting energy from dedicated radio frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximizes the energy harvested and the overall throughput. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.
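A toy harvest-then-transmit model conveys the time-allocation trade-off behind sum-throughput maximization; the channel model, conversion efficiency and equal TDMA shares below are simplifying assumptions, not the paper's exact formulation:

```python
import math

def sum_throughput(tau, h, g, P=1.0, eta=0.6, noise=1e-3):
    """Harvest-then-transmit sketch: nodes harvest RF energy for a
    fraction tau of the frame, then transmit TDMA in equal shares of
    the remaining time. h, g: downlink/uplink channel gains."""
    n = len(h)
    share = (1.0 - tau) / n              # each node's uplink slot
    total = 0.0
    for hi, gi in zip(h, g):
        energy = eta * P * hi * tau      # energy harvested during tau
        power = energy / share           # spent uniformly over own slot
        total += share * math.log2(1.0 + gi * power / noise)
    return total

# the doubly near-far effect: the far node (gain 0.1) harvests less
# AND suffers more uplink attenuation
h = [1.0, 0.4, 0.1]
g = [1.0, 0.4, 0.1]
best_tau = max((i / 100 for i in range(1, 100)),
               key=lambda t: sum_throughput(t, h, g))
```

A grid search over the harvesting fraction, as above, is the crudest stand-in for the paper's optimization; fairness-aware variants would weight or constrain the per-node terms instead of simply summing them.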

  17. Optimizing the Energy and Throughput of a Water-Quality Monitoring System

    Directory of Open Access Journals (Sweden)

    Segun O. Olatinwo

    2018-04-01

    This work presents a new approach to the maximization of energy and throughput in a wireless sensor network (WSN), with the intention of applying the approach to water-quality monitoring. Water-quality monitoring using WSN technology has become an interesting research area. Energy scarcity is a critical issue that plagues the widespread deployment of WSN systems. Different power supplies, harvesting energy from sustainable sources, have been explored. However, when energy-efficient models are not put in place, energy-harvesting-based WSN systems may experience an unstable energy supply, resulting in interrupted communication and low system throughput. To alleviate these problems, this paper presents the joint maximization of the energy harvested by sensor nodes and their information-transmission rate using a sum-throughput technique. A wireless information and power transfer (WIPT) method is considered, harvesting energy from dedicated radio frequency sources. Due to the doubly near-far condition that confronts WIPT systems, a new WIPT system is proposed to improve the fairness of resource utilization in the network. Numerical simulation results are presented to validate the mathematical formulations for the optimization problem, which maximizes the energy harvested and the overall throughput. Defining the performance metrics of achievable throughput and fairness in resource sharing, the proposed WIPT system outperforms an existing state-of-the-art WIPT system, with the comparison based on numerical simulations of both systems. The improved energy efficiency of the proposed WIPT system contributes to addressing the problem of energy scarcity.

  18. Bidirectional User Throughput Maximization Based on Feedback Reduction in LiFi Networks

    OpenAIRE

    Soltani, Mohammad Dehghani; Wu, Xiping; Safari, Majid; Haas, Harald

    2017-01-01

    Channel-adaptive signalling, which is based on feedback, can enhance almost any performance metric. Unlike the radio frequency (RF) channel, the optical wireless communications (OWC) channel is fairly static. This feature enables a potential improvement of the bidirectional user throughput by reducing the amount of feedback. Light-Fidelity (LiFi) is a subset of OWC, and it is a bidirectional, high-speed and fully networked wireless communication technology where visible light ...

  19. MAXIMIZING THE BENEFITS OF ERP SYSTEMS

    Directory of Open Access Journals (Sweden)

    Paulo André da Conceição Menezes

    2010-04-01

    ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied across different phases: the strategic priorities and strategic planning defined as ERP Strategy; the business processes review and ERP selection in the pre-implementation phase; project management and ERP adaptation in the implementation phase; and ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case-study methodology, this research developed and tested a framework for maximizing the benefits of ERP systems, and seeks to help ERP initiatives optimize their performance.

  20. Maximizing the benefits of a dewatering system

    International Nuclear Information System (INIS)

    Matthews, P.; Iverson, T.S.

    1999-01-01

    The use of dewatering systems in the mining, industrial sludge and sewage waste treatment industries is discussed, along with some of the problems encountered while using drilling-fluid dewatering technology. The technology is an acceptable drilling-waste handling alternative, but it has had problems associated with recycled fluid incompatibility, high chemical costs and system inefficiencies. This paper discusses the following five action areas that can maximize the benefits and help reduce the costs of a dewatering project: (1) co-ordinate all services; (2) choose equipment that fits the drilling program; (3) match the chemical treatment with the drilling fluid types; (4) determine recycled fluid compatibility requirements; and (5) determine the disposal requirements before project start-up. 2 refs., 5 figs

  1. Throughput Maximization Using an SVM for Multi-Class Hypothesis-Based Spectrum Sensing in Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Sana Ullah Jan

    2018-03-01

    A framework of spectrum sensing with a multi-class hypothesis is proposed to maximize the achievable throughput in cognitive radio networks. The energy range of a sensing signal under the hypothesis that the primary user is absent (in a conventional two-class hypothesis) is further divided into quantized regions, whereas the hypothesis that the primary user is present is conserved. The non-radio-frequency energy-harvesting-equipped secondary user transmits, when the primary user is absent, with a transmission power based on the hypothesis result (the energy level of the sensed signal) and the residual energy in the battery: the lower the energy of the received signal, the higher the transmission power, and vice versa. Conversely, the lower the residual energy in the node, the lower the transmission power. This technique increases the throughput of a secondary link by providing a higher number of transmission events compared to the conventional two-class hypothesis. Furthermore, transmission with low power for higher energy levels in the sensed signal reduces the probability of interference with primary users if, for instance, detection was missed. The familiar machine learning algorithm known as a support vector machine (SVM) is used in a one-versus-rest approach to classify the input signal into predefined classes. The input signal to the SVM is composed of three statistical features extracted from the sensed signal and a number ranging from 0 to 100 representing the percentage of residual energy in the node's battery. To increase the generalization of the classifier, k-fold cross-validation is utilized in the training phase. The experimental results show that an SVM with the given features performs satisfactorily for all kernels, but an SVM with a polynomial kernel outperforms linear and radial-basis-function kernels in terms of accuracy. Furthermore, the proposed multi-class hypothesis achieves higher throughput compared to the
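The one-versus-rest decomposition used here can be sketched generically; the paper trains SVMs, but a dependency-free perceptron stands in below as the per-class binary scorer, and the two features (normalized sensed energy, battery fraction) are illustrative stand-ins for the paper's inputs:

```python
import numpy as np

def perceptron(X1, t, epochs=200):
    """Minimal perceptron; t in {-1, +1}. Converges to zero training
    error on linearly separable data."""
    w = np.zeros(X1.shape[1])
    for _ in range(epochs):
        errors = 0
        for x, ti in zip(X1, t):
            if ti * (x @ w) <= 0:
                w += ti * x
                errors += 1
        if errors == 0:
            break
    return w

class OneVsRest:
    """One-versus-rest multi-class wrapper: one binary scorer per class
    (+1 for that class, -1 for the rest); predict by highest score.
    The paper uses SVMs here; any binary classifier slots in."""
    def fit(self, X, y):
        X1 = np.hstack([X, np.ones((len(X), 1))])   # bias column
        self.classes_ = np.unique(y)
        self.w_ = {c: perceptron(X1, np.where(y == c, 1.0, -1.0))
                   for c in self.classes_}
        return self
    def predict(self, X):
        X1 = np.hstack([X, np.ones((len(X), 1))])
        scores = np.column_stack([X1 @ self.w_[c] for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]

# toy features: [normalized sensed energy, battery fraction] (illustrative)
X = np.array([[0.05, 0.05], [0.10, 0.10],    # class 0
              [0.90, 0.10], [0.95, 0.05],    # class 1
              [0.10, 0.90], [0.05, 0.95]])   # class 2
y = np.array([0, 0, 1, 1, 2, 2])
clf = OneVsRest().fit(X, y)
assert (clf.predict(X) == y).all()
```

Swapping the perceptron for a kernel SVM (e.g. a polynomial kernel, as the abstract reports works best) changes only the per-class scorer, not the one-versus-rest wrapper.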

  2. Violating Bell inequalities maximally for two d-dimensional systems

    International Nuclear Information System (INIS)

    Chen Jingling; Wu Chunfeng; Oh, C. H.; Kwek, L. C.; Ge Molin

    2006-01-01

    We show the maximal violation of Bell inequalities for two d-dimensional systems by using the method of the Bell operator. The maximal violation corresponds to the maximal eigenvalue of the Bell operator matrix. The eigenvectors corresponding to these eigenvalues are described by asymmetric entangled states. We estimate the maximum value of the eigenvalue for large dimension. A family of elegant entangled states |Ψ⟩_app that violate the Bell inequality more strongly than the maximally entangled state, but are somewhat close to these eigenvectors, is presented. These approximate states can potentially be useful for quantum cryptography as well as many other important fields of quantum information.
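For the d = 2 case, the Bell-operator method reduces to diagonalizing the CHSH operator, whose maximal eigenvalue is Tsirelson's bound 2√2 (attained, in this dimension, by the maximally entangled state); a numpy sketch:

```python
import numpy as np

# Pauli matrices
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

# Optimal CHSH settings: A1 = Z, A2 = X, B1 = (Z+X)/sqrt2, B2 = (Z-X)/sqrt2
B1 = (Z + X) / np.sqrt(2)
B2 = (Z - X) / np.sqrt(2)

# Bell operator B = A1 (x) (B1 + B2) + A2 (x) (B1 - B2)
bell = np.kron(Z, B1 + B2) + np.kron(X, B1 - B2)
max_eig = np.linalg.eigvalsh(bell).max()
assert abs(max_eig - 2 * np.sqrt(2)) < 1e-12   # Tsirelson's bound
```

For d > 2, the same recipe applies to the higher-dimensional Bell operators the abstract refers to, where the top eigenvector is an asymmetric (non-maximally) entangled state.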

  3. Maximizing Your Investment in Building Automation System Technology.

    Science.gov (United States)

    Darnell, Charles

    2001-01-01

    Discusses how organizational issues and system standardization can be important factors in determining an institution's ability to fully exploit contemporary building automation systems (BAS). Also presents a management strategy for maximizing BAS investments. (GR)

  4. Use and optimization of a dual-flowrate loading strategy to maximize throughput in Protein-A affinity chromatography.

    Science.gov (United States)

    Ghose, Sanchayita; Nagrath, Deepak; Hubbard, Brian; Brooks, Clayton; Cramer, Steven M

    2004-01-01

    The effect of an alternative strategy employing two different flowrates during loading was explored as a means of increasing system productivity in Protein-A chromatography. The effect of such a loading strategy was evaluated using a chromatographic model that accurately predicted experimental breakthrough curves for this Protein-A system. A gradient-based optimization routine was carried out to establish the optimal loading conditions (initial and final flowrates and switching time). The two-step loading strategy (a higher flowrate during the initial stages followed by a lower flowrate) was evaluated for an Fc-fusion protein and was found to result in significant improvements in process throughput. In an extension of this optimization routine, dynamic loading capacity and productivity were simultaneously optimized using a weighted objective function, and this result was compared to that obtained with a single flowrate. Again, the dual-flowrate strategy was found to be superior.

  5. Maximal imaginary eigenvalues in optimal systems

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    1991-07-01

    In this note we present equations that uniquely determine the maximum possible imaginary value of the closed loop eigenvalues in an LQ-optimal system, irrespective of how the state weight matrix is chosen, provided a real symmetric solution of the algebraic Riccati equation exists. In addition, the corresponding state weight matrix and the solution to the algebraic Riccati equation are derived for a class of linear systems. A fundamental lemma for the existence of a real symmetric solution to the algebraic Riccati equation is derived for this class of linear systems.
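The algebraic Riccati machinery referenced here can be sketched numerically via the Hamiltonian eigenvector method (numpy only); the plant below is a hypothetical lightly damped oscillator, and the closed-loop eigenvalues are what the note's bound on imaginary parts concerns:

```python
import numpy as np

def solve_are(A, B, Q, R):
    """Solve the continuous-time algebraic Riccati equation
        A'P + P A - P B R^-1 B' P + Q = 0
    via the stable invariant subspace of the Hamiltonian matrix."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T],
                  [-Q, -A.T]])
    eigvals, eigvecs = np.linalg.eig(H)
    stable = eigvecs[:, eigvals.real < 0]   # n stable directions
    X1, X2 = stable[:n, :], stable[n:, :]
    P = np.real(X2 @ np.linalg.inv(X1))
    return (P + P.T) / 2                    # symmetrize

# lightly damped oscillator with LQ state feedback u = -R^-1 B' P x
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P = solve_are(A, B, Q, R)
K = np.linalg.inv(R) @ B.T @ P
cl_eigs = np.linalg.eigvals(A - B @ K)     # closed-loop eigenvalues
assert (cl_eigs.real < 0).all()            # LQ closed loop is stable
```

Sweeping Q in such a sketch and recording `cl_eigs.imag` is one way to observe numerically the kind of bound on imaginary parts that the note derives analytically.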

  6. MAXIMIZING THE BENEFITS OF ERP SYSTEMS

    OpenAIRE

    Paulo André da Conceição Menezes; Fernando González-Ladrón-de-Guevara

    2010-01-01

    The ERP (Enterprise Resource Planning) systems have been consolidated in companies with different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied in different phases, such as the strategic priorities and strategic planning defined as ERP Strategy; business processes review and the ERP selection in the pre-implementation phase, the project management and ERP adaptation in the implementation phase, as well as th...

  7. Throughput of a MIMO OFDM based WLAN system

    NARCIS (Netherlands)

    Schenk, T.C.W.; Dolmans, G.; Modonesi, I.

    2004-01-01

    In this paper, the system throughput of a wireless local-area-network (WLAN) based on multiple-input multipleoutput orthogonal frequency division multiplexing (MIMO OFDM) is studied. A broadband channel model is derived from indoor channel measurements. This model is used in simulations to evaluate

  8. High throughput nanostructure-initiator mass spectrometry screening of microbial growth conditions for maximal β-glucosidase production.

    Science.gov (United States)

    Cheng, Xiaoliang; Hiras, Jennifer; Deng, Kai; Bowen, Benjamin; Simmons, Blake A; Adams, Paul D; Singer, Steven W; Northen, Trent R

    2013-01-01

    Production of biofuels via enzymatic hydrolysis of complex plant polysaccharides is a subject of intense global interest. Microbial communities are known to express a wide range of enzymes necessary for the saccharification of lignocellulosic feedstocks and serve as a powerful reservoir for enzyme discovery. However, the growth temperature and conditions that yield high cellulase activity vary widely, and the throughput to identify optimal conditions has been limited by the slow handling and conventional analysis. A rapid method that uses small volumes of isolate culture to resolve specific enzyme activity is needed. In this work, a high throughput nanostructure-initiator mass spectrometry (NIMS)-based approach was developed for screening a thermophilic cellulolytic actinomycete, Thermobispora bispora, for β-glucosidase production under various growth conditions. Media that produced high β-glucosidase activity were found to be I/S + glucose or microcrystalline cellulose (MCC), Medium 84 + rolled oats, and M9TE + MCC at 45°C. Supernatants of cell cultures grown in M9TE + 1% MCC cleaved 2.5 times more substrate at 45°C than at all other temperatures. While T. bispora is reported to grow optimally at 60°C in Medium 84 + rolled oats and M9TE + 1% MCC, approximately 40% more conversion was observed at 45°C. This high throughput NIMS approach may provide an important tool in discovery and characterization of enzymes from environmental microbes for industrial and biofuel applications.

  9. High throughput nanostructure-initiator mass spectrometry screening of microbial growth conditions for maximal β-glucosidase production

    Directory of Open Access Journals (Sweden)

    Xiaoliang eCheng

    2013-12-01

    Full Text Available Production of biofuels via enzymatic hydrolysis of complex plant polysaccharides is a subject of intense global interest. Microbial communities are known to express a wide range of enzymes necessary for the saccharification of lignocellulosic feedstocks and serve as a powerful reservoir for enzyme discovery. However, the growth temperature and conditions that yield high cellulase activity vary widely, and the throughput to identify optimal conditions has been limited by the slow handling and conventional analysis. A rapid method that uses small volumes of isolate culture to resolve specific enzyme activity is needed. In this work, a high throughput nanostructure-initiator mass spectrometry (NIMS)-based approach was developed for screening a thermophilic cellulolytic actinomycete, Thermobispora bispora, for β-glucosidase production under various growth conditions. Media that produced high β-glucosidase activity were found to be I/S + glucose or microcrystalline cellulose (MCC), Medium 84 + rolled oats, and M9TE + MCC at 45 °C. Supernatants of cell cultures grown in M9TE + 1% MCC cleaved 2.5 times more substrate at 45 °C than at all other temperatures. While T. bispora is reported to grow optimally at 60 °C in Medium 84 + rolled oats and M9TE + 1% MCC, approximately 40% more conversion was observed at 45 °C. This high throughput NIMS approach may provide an important tool in discovery and characterization of enzymes from environmental microbes for industrial and biofuel applications.

  10. Development of automated analytical systems for large throughput

    International Nuclear Information System (INIS)

    Ernst, P.C.; Hoffman, E.L.

    1982-01-01

    The need to be able to handle a large throughput of samples for neutron activation analysis has led to the development of automated counting and sample handling systems. These are coupled with available computer-assisted INAA techniques to perform a wide range of analytical services on a commercial basis. A fully automated delayed neutron counting system and a computer controlled pneumatic transfer for INAA use are described, as is a multi-detector gamma-spectroscopy system. (author)

  11. High throughput solar cell ablation system

    Science.gov (United States)

    Harley, Gabriel; Pass, Thomas; Cousins, Peter John; Viatella, John

    2012-09-11

    A solar cell is formed using a solar cell ablation system. The ablation system includes a single laser source and several laser scanners. The laser scanners include a master laser scanner, with the rest of the laser scanners being slaved to the master laser scanner. A laser beam from the laser source is split into several laser beams, with the laser beams being scanned onto corresponding wafers using the laser scanners in accordance with one or more patterns. The laser beams may be scanned on the wafers using the same or different power levels of the laser source.

  12. Design Concept Evaluation Using System Throughput Model

    International Nuclear Information System (INIS)

    Sequeira, G.; Nutt, W. M.

    2004-01-01

    The U.S. Department of Energy (DOE) Office of Civilian Radioactive Waste Management (OCRWM) is currently developing the technical bases to support the submittal of a license application for construction of a geologic repository at Yucca Mountain, Nevada to the U.S. Nuclear Regulatory Commission. The Office of Repository Development (ORD) is responsible for developing the design of the proposed repository surface facilities for the handling of spent nuclear fuel and high level nuclear waste. Preliminary design activities are underway to sufficiently develop the repository surface facilities design for inclusion in the license application. The design continues to evolve to meet mission needs and to satisfy both regulatory and program requirements. A system engineering approach is being used in the design process since the proposed repository facilities are dynamically linked by a series of sub-systems and complex operations. In addition, the proposed repository facility is a major system element of the overall waste management process being developed by the OCRWM. Such an approach includes iterative probabilistic dynamic simulation as an integral part of the design evolution process. A dynamic simulation tool helps to determine if: (1) the mission and design requirements are complete, robust, and well integrated; (2) the design solutions under development meet the design requirements and mission goals; (3) opportunities exist where the system can be improved and/or optimized; and (4) proposed changes to the mission, and design requirements have a positive or negative impact on overall system performance and if design changes may be necessary to satisfy these changes. This paper will discuss the type of simulation employed to model the waste handling operations. It will then discuss the process being used to develop the Yucca Mountain surface facilities model. The latest simulation model and the results of the simulation and how the data were used in the design

  13. High-throughput bioinformatics with the Cyrille2 pipeline system

    Directory of Open Access Journals (Sweden)

    de Groot Joost CW

    2008-02-01

    Full Text Available Abstract Background Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses are often interdependent and chained together to form complex workflows or pipelines. Given the volume of the data used and the multitude of computational resources available, specialized pipeline software is required to make high-throughput analysis of large-scale omics datasets feasible. Results We have developed a generic pipeline system called Cyrille2. The system is modular in design and consists of three functionally distinct parts: (1) a web-based, graphical user interface (GUI) that enables a pipeline operator to manage the system; (2) the Scheduler, which forms the functional core of the system and which tracks what data enters the system and determines what jobs must be scheduled for execution; and (3) the Executor, which searches for scheduled jobs and executes these on a compute cluster. Conclusion The Cyrille2 system is an extensible, modular system, implementing the stated requirements. Cyrille2 enables easy creation and execution of high throughput, flexible bioinformatics pipelines.
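The three-part split described above (interface, scheduler, executor) can be sketched generically. The following is a minimal dependency-driven pipeline loop, not the Cyrille2 implementation; all names are illustrative:

```python
class Pipeline:
    """Toy pipeline: jobs with dependencies, run in dependency order."""

    def __init__(self):
        self.jobs = {}  # job name -> (list of dependency names, callable)

    def add(self, name, deps, fn):
        self.jobs[name] = (deps, fn)

    def schedule(self, done):
        # Scheduler role: a job is ready once all its dependencies are done
        return [n for n, (deps, _) in self.jobs.items()
                if n not in done and all(d in done for d in deps)]

    def run(self):
        done, results = set(), {}
        while len(done) < len(self.jobs):
            ready = self.schedule(done)
            if not ready:
                raise RuntimeError("cyclic or unsatisfiable dependencies")
            for name in ready:  # Executor role: run each scheduled job
                deps, fn = self.jobs[name]
                results[name] = fn(*[results[d] for d in deps])
                done.add(name)
        return results
```

A real system would persist job state and dispatch to a compute cluster instead of calling functions in-process, but the schedule/execute loop is the same.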

  14. Maximization of energy in the output of a linear system

    International Nuclear Information System (INIS)

    Dudley, D.G.

    1976-01-01

    A time-limited signal which, when passed through a linear system, maximizes the total output energy is considered. Previous work has shown that the solution is given by the eigenfunction associated with the maximum eigenvalue in a Hilbert-Schmidt integral equation. Analytical results are available for the case where the transfer function is a low-pass filter. This work is extended by obtaining a numerical solution to the integral equation which allows results for reasonably general transfer functions
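In symbols, a sketch of the standard formulation (assuming a real impulse response h and an input x supported on [0, T] with unit energy):

```latex
E_{\mathrm{out}}
  = \int_{-\infty}^{\infty} \bigl|(h * x)(t)\bigr|^{2}\,dt
  = \int_{0}^{T}\!\!\int_{0}^{T} x(t)\,K(t,s)\,x(s)\,dt\,ds,
\qquad
K(t,s) = \int_{-\infty}^{\infty} h(\tau - t)\,h(\tau - s)\,d\tau .
```

Maximizing E_out subject to the unit-energy constraint leads to the Hilbert-Schmidt eigenproblem ∫₀ᵀ K(t,s) φ(s) ds = λ φ(t), and the maximum output energy equals the largest eigenvalue, consistent with the abstract.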

  15. Synthesis of magnetic systems producing field with maximal scalar characteristics

    International Nuclear Information System (INIS)

    Klevets, Nickolay I.

    2005-01-01

    A method for the synthesis of magnetic systems (MSs) consisting of uniformly magnetized blocks is proposed. The method makes it possible to synthesize MSs that maximize the value of any scalar characteristic of the magnetic field. In particular, it is possible to synthesize MSs that maximize the projection of the field on a given vector, the gradient of the field modulus or of the field energy along a given directing vector, the field magnitude, the magnetic flux through a given surface, the scalar product of the field or force with a directing function given in some region of space, etc. The synthesized MSs provide maximal efficiency of permanent magnet utilization. The proposed synthesis method fundamentally changes the design procedure, which proceeds as follows: (a) choose the sizes, form and number of blocks of a system on technological (economic) grounds; (b) using the proposed synthesis method, find the orientation of block magnetization that provides the maximum possible effect of magnet utilization in the system obtained in (a). This approach considerably reduces the time required to design MSs and guarantees the maximum possible efficiency of magnet utilization. Besides, it provides absolute assurance of the 'ideality' of an MS design and yields an exact estimate of the limiting parameters of the field in the working area of a projected MS. The method is applicable to systems containing components made from soft magnetic material with linear magnetic properties.

  16. Nonimaging optics maximizing exergy for hybrid solar system

    Science.gov (United States)

    Winston, Roland; Jiang, Lun; Abdelhamid, Mahmoud; Widyolar, Bennett K.; Ferry, Jonathan; Cygan, David; Abbasi, Hamid; Kozlov, Alexandr; Kirk, Alexander; Elarde, Victor; Osowski, Mark

    2016-09-01

    The project team of the University of California at Merced (UC-Merced), Gas Technology Institute (GTI) and MicroLink Devices Inc. (MicroLink) is developing a hybrid solar system using a nonimaging compound parabolic concentrator (CPC) that maximizes the exergy by delivering direct electricity and on-demand heat. The hybrid solar system technology uses secondary optics in a solar receiver to achieve high efficiency at high temperature, collects heat in particles, and uses reflective liftoff cooled double junction (2J) InGaP/GaAs solar cells with backside infrared (IR) reflectors on the secondary optical element to raise exergy efficiency. The nonimaging optics provides additional concentration for the high-temperature thermal stream and enables it to operate efficiently at 650 °C while the solar cell is maintained at 40 °C to operate as efficiently as possible.

  17. Sum-Rate Maximization of Coordinated Direct and Relay Systems

    DEFF Research Database (Denmark)

    Sun, Fan; Popovski, Petar; Thai, Chan

    2012-01-01

    Joint processing of multiple communication flows in wireless systems has given rise to a number of novel transmission techniques, notably the two-way relaying based on wireless network coding. Recently, a related set of techniques has emerged, termed coordinated direct and relay (CDR) transmissions, where the constellation of traffic flows is more general than the two-way. Regardless of the actual traffic flows, in a CDR scheme the relay has a central role in managing the interference and boosting the overall system performance. In this paper we investigate the novel transmission modes, based on amplify-and-forward, that arise when the relay is equipped with multiple antennas and can use beamforming. We focus on one representative traffic type, with one uplink and one downlink user, and consider relay beamforming that maximizes the achievable sum-rate. The beamforming criterion leads to a non

  18. High Throughput System for Plant Height and Hyperspectral Measurement

    Science.gov (United States)

    Zhao, H.; Xu, L.; Jiang, H.; Shi, S.; Chen, D.

    2018-04-01

    Hyperspectral and three-dimensional measurement can obtain the intrinsic physicochemical properties and external geometrical characteristics of objects, respectively. Currently, a variety of sensors are integrated into a system to collect spectral and morphological information in agriculture. However, previous experiments were usually performed with several commercial devices on a single platform. Inadequate registration and synchronization among instruments often resulted in mismatch between spectral and 3D information of the same target. Moreover, a narrow field of view (FOV) extends the working hours in farms. Therefore, we propose a high throughput prototype that combines stereo vision and grating dispersion to simultaneously acquire hyperspectral and 3D information.

  19. HIGH THROUGHPUT SYSTEM FOR PLANT HEIGHT AND HYPERSPECTRAL MEASUREMENT

    Directory of Open Access Journals (Sweden)

    H. Zhao

    2018-04-01

    Full Text Available Hyperspectral and three-dimensional measurement can obtain the intrinsic physicochemical properties and external geometrical characteristics of objects, respectively. Currently, a variety of sensors are integrated into a system to collect spectral and morphological information in agriculture. However, previous experiments were usually performed with several commercial devices on a single platform. Inadequate registration and synchronization among instruments often resulted in mismatch between spectral and 3D information of the same target. Moreover, a narrow field of view (FOV) extends the working hours in farms. Therefore, we propose a high throughput prototype that combines stereo vision and grating dispersion to simultaneously acquire hyperspectral and 3D information.

  20. Mutually Unbiased Maximally Entangled Bases for the Bipartite System C^d ⊗ C^{dk}

    Science.gov (United States)

    Nan, Hua; Tao, Yuan-Hong; Wang, Tian-Jiao; Zhang, Jun

    2016-10-01

    The construction of maximally entangled bases for the bipartite system C^d ⊗ C^d is discussed first, and some mutually unbiased bases with maximally entangled bases are given, where 2 ≤ d ≤ 5. Moreover, we study a systematic way of constructing mutually unbiased maximally entangled bases for the bipartite system C^d ⊗ C^{dk}.

  1. On differential operators generating iterative systems of linear ODEs of maximal symmetry algebra

    Science.gov (United States)

    Ndogmo, J. C.

    2017-06-01

    Although every iterative scalar linear ordinary differential equation is of maximal symmetry algebra, the situation is different and far more complex for systems of linear ordinary differential equations, and an iterative system of linear equations need not be of maximal symmetry algebra. We illustrate these facts by examples and derive families of vector differential operators whose iterations are all linear systems of equations of maximal symmetry algebra. Some consequences of these results are also discussed.

  2. Linear Processing Design of Amplify-and-Forward Relays for Maximizing the System Throughput

    Directory of Open Access Journals (Sweden)

    Qiang Wang

    2018-01-01

    Full Text Available In this paper, we first study the linear processing of amplify-and-forward (AF) relays for the multiple-relay, multiple-user scenario. We regard all relays as one special "relay", and then the subcarrier pairing, relay selection and channel assignment can be seen as a linear processing of this special "relay". Under fixed power allocation, the linear processing of AF relays can be regarded as a permutation matrix. Employing the partitioned matrix, we propose an optimal linear processing design for AF relays that finds the optimal permutation matrix based on the sorting of the received SNRs over the subcarriers from the BS to the relays and from the relays to the users, respectively. We then prove the optimality of the proposed linear processing scheme. Through the proposed scheme, we can obtain the optimal subcarrier pairing, relay selection and channel assignment under a given power allocation in polynomial time. Finally, we propose an iterative algorithm based on the proposed linear processing scheme and the Lagrange dual domain method to solve the joint optimization problem involving subcarrier pairing, relay selection, channel assignment and power allocation. Simulation results illustrate that the proposed algorithm achieves excellent performance.
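The sorted-SNR pairing idea can be sketched roughly as follows. The details are assumptions for illustration: the standard two-hop AF end-to-end SNR γ₁γ₂/(γ₁+γ₂+1), a factor 1/2 for two-phase relaying, and function names not taken from the paper:

```python
import numpy as np

def pairing_permutation(snr_hop1, snr_hop2):
    """Pair the k-th strongest hop-1 (BS->relay) subcarrier with the k-th
    strongest hop-2 (relay->user) subcarrier. P[i, j] = 1 means hop-1
    subcarrier i is forwarded on hop-2 subcarrier j."""
    order1 = np.argsort(snr_hop1)[::-1]  # hop-1 indices, strongest first
    order2 = np.argsort(snr_hop2)[::-1]  # hop-2 indices, strongest first
    P = np.zeros((len(snr_hop1), len(snr_hop2)), dtype=int)
    P[order1, order2] = 1
    return P

def af_sum_rate(snr_hop1, snr_hop2, P):
    """Sum rate of a pairing under the usual two-hop AF end-to-end SNR."""
    rate = 0.0
    for i, j in zip(*np.nonzero(P)):
        e2e = snr_hop1[i] * snr_hop2[j] / (snr_hop1[i] + snr_hop2[j] + 1.0)
        rate += 0.5 * np.log2(1.0 + e2e)  # 1/2 for the two relaying phases
    return rate
```

The permutation matrix built this way encodes the pairing in exactly the linear-processing form the abstract describes; the full algorithm would additionally handle relay selection, channel assignment and power allocation.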

  3. High-throughput bioinformatics with the Cyrille2 pipeline system.

    NARCIS (Netherlands)

    Fiers, M.W.E.J.; Burgt, van der A.; Datema, E.; Groot, de J.C.W.; Ham, van R.C.H.J.

    2008-01-01

    Background - Modern omics research involves the application of high-throughput technologies that generate vast volumes of data. These data need to be pre-processed, analyzed and integrated with existing knowledge through the use of diverse sets of software tools, models and databases. The analyses

  4. A microliter-scale high-throughput screening system with quantum-dot nanoprobes for amyloid-β aggregation inhibitors.

    Directory of Open Access Journals (Sweden)

    Yukako Ishigaki

    Full Text Available The aggregation of amyloid β protein (Aβ) is a key step in the pathogenesis of Alzheimer's disease (AD), and therefore inhibitory substances for Aβ aggregation may have preventive and/or therapeutic potential for AD. Here we report a novel microliter-scale high-throughput screening system for Aβ aggregation inhibitors based on fluorescence microscopy-imaging technology with quantum-dot nanoprobes. This screening system allows analysis with a 5-µl sample volume when a 1536-well plate is used, and the inhibitory activity can be estimated as a half-maximal effective concentration (EC50). We attempted to comprehensively screen Aβ aggregation inhibitors from 52 spices using this system to assess whether this novel screening system is actually useful for screening inhibitors. Screening results indicate that approximately 90% of the ethanolic extracts from the spices showed inhibitory activity for Aβ aggregation. Interestingly, spices belonging to the Lamiaceae, the mint family, showed significantly higher activity than the average of the tested spices. Furthermore, we tried to isolate the main inhibitory compound from Satureja hortensis, summer savory, a member of the Lamiaceae, using this system, and revealed that the main active compound was rosmarinic acid. These results demonstrate that this novel microliter-scale high-throughput screening system can be applied to the actual screening of Aβ aggregation inhibitors. Since this system can analyze at a microscopic scale, further miniaturization of the system should be readily possible, as with protein microarray technology.

  5. High-throughput optical system for HDES hyperspectral imager

    Science.gov (United States)

    Václavík, Jan; Melich, Radek; Pintr, Pavel; Pleštil, Jan

    2015-01-01

    Affordable, long-wave infrared hyperspectral imaging calls for use of an uncooled FPA with high-throughput optics. This paper describes the design of the optical part of a stationary hyperspectral imager in a spectral range of 7–14 µm with a field of view of 20° × 10°. The imager employs a push-broom method implemented with a scanning mirror. High throughput and the demand for simplicity and rigidity led to a fully refractive design with highly aspheric surfaces and off-axis positioning of the detector array. The design was optimized to exploit the machinability of infrared materials by the SPDT method and simple assembly.

  6. Nonlinear Impairment Compensation Using Expectation Maximization for PDM 16-QAM Systems

    DEFF Research Database (Denmark)

    Zibar, Darko; Winther, Ole; Franceschi, Niccolo

    2012-01-01

    We show experimentally that by using a nonlinear signal-processing algorithm, expectation maximization, nonlinear system tolerance can be increased by 2 dB. Expectation maximization is also effective in combating I/Q modulator nonlinearities and laser linewidth.

  7. Maximal load of the vitamin B12 transport system

    DEFF Research Database (Denmark)

    Lildballe, Dorte L; Mutti, Elena; Birn, Henrik

    2012-01-01

    Several studies suggest that the vitamin B12 (B12) transport system can be used for the cellular delivery of B12-conjugated drugs, also in long-term treatment. Whether this strategy will affect the endogenous metabolism of B12 is not known. To study the effect of treatment with excess B12

  8. Prioritizing Training To Maximize Results: The 3 Box System.

    Science.gov (United States)

    Kearns, Paul

    2003-01-01

    Considers fundamentals of effective training and focuses on the evaluation of training. Describes the 3 Box System, which provides a framework for discussing: (1) basic training needs and priorities; (2) added value training, including ROI (return on investment); evaluation; and (3) prioritizing training budgets. (LRW)

  9. Bit rate maximization for multicast LP-OFDM systems in PLC context

    OpenAIRE

    Maiga , Ali; Baudais , Jean-Yves; Hélard , Jean-François

    2009-01-01

    ISBN: 978-88-900984-8-2.; International audience; In this paper, we propose a new resource allocation algorithm based on a linear precoding technique for multicast OFDM systems. The linear precoding technique applied to OFDM systems has already proved its ability to significantly increase the system throughput in a powerline communication (PLC) context. Simulations through PLC channels show that this algorithm outperforms the classical multicast method (up to 7.3% bit rate gain) and gives better pe...

  10. Topology Optimization of a Vibrating System of Rigid and Flexible Bodies for Maximizing Repeated Eigenfrequencies

    International Nuclear Information System (INIS)

    Ahn, Byungseong; Kim, Suh In; Kim, Yoon Young

    2016-01-01

    When a system consisting of rigid and flexible bodies is optimized to improve its dynamic characteristics, its eigenfrequencies are typically maximized. While topology optimization formulations dealing with simultaneous design of a system of rigid and flexible bodies are available, studies on eigenvalue maximization of the system are rare. In particular, no work has solved for the case when the target frequency becomes one of the repeated eigenfrequencies. The problem involving repeated eigenfrequencies is solved in this study, and a topology optimization formulation and sensitivity analysis are presented. Further, several numerical case studies are considered to demonstrate the validity of the proposed formulation

  11. Tissue P Systems With Channel States Working in the Flat Maximally Parallel Way.

    Science.gov (United States)

    Song, Bosheng; Perez-Jimenez, Mario J; Paun, Gheorghe; Pan, Linqiang

    2016-10-01

    Tissue P systems with channel states are a class of bio-inspired parallel computational models, where rules are used in a sequential manner (on each channel, at most one rule can be used at each step). In this work, tissue P systems with channel states working in a flat maximally parallel way are considered, where at each step, on each channel, a maximal set of applicable rules that pass from a given state to a unique next state is chosen and each rule in the set is applied once. The computational power of such P systems is investigated. Specifically, it is proved that tissue P systems with channel states and antiport rules of length two are able to compute Parikh sets of finite languages, and such P systems with one cell and noncooperative symport rules can compute at least all Parikh sets of matrix languages. Some Turing universality results are also provided. Moreover, the NP-complete problem SAT is solved by tissue P systems with channel states, cell division and noncooperative symport rules working in the flat maximally parallel way; nevertheless, if channel states are not used, then such P systems working in the flat maximally parallel way can solve only tractable problems. These results show that channel states provide a frontier of tractability between efficiency and non-efficiency in the framework of tissue P systems with cell division (assuming P ≠ NP).
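The flat maximally parallel mode can be illustrated on plain multiset rewriting (ignoring channels and states for brevity; this is an illustrative toy, not a full tissue P system): a maximal *set* of applicable rules is selected, and each selected rule fires exactly once.

```python
from collections import Counter

def flat_max_parallel_step(multiset, rules):
    """One flat-maximally-parallel step: greedily build a maximal set of
    applicable rules (each rule used at most once), then apply each once.
    A rule is a pair (consume, produce) of object->count dicts."""
    ms = Counter(multiset)
    chosen = []
    for consume, produce in rules:          # fixed order picks one maximal set
        if all(ms[obj] >= n for obj, n in consume.items()):
            for obj, n in consume.items():  # reserve the consumed objects
                ms[obj] -= n
            chosen.append((consume, produce))
    for _, produce in chosen:               # add products after selection
        for obj, n in produce.items():
            ms[obj] += n
    return +ms, chosen                      # unary + drops zero counts
```

Note the contrast with ordinary maximal parallelism, where a single rule may fire many times in one step; here each applicable rule in the chosen set fires exactly once.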

  12. Maximizing the information transfer in a quantum-limited light-scattering system

    DEFF Research Database (Denmark)

    Lading, Lars; Jørgensen, Thomas Martini

    1990-01-01

    A quantum-limited light-scattering system is considered. The spatial configuration that maximizes a given figure of merit is investigated, assuming that the emitted light has Poisson photon statistics. A specific system for measuring the velocity of a small particle is considered as an example.

  13. A production throughput forecasting system in an automated hard disk drive test operation using GRNN

    Energy Technology Data Exchange (ETDEWEB)

    Samattapapong, N.; Afzulpurkar, N.

    2016-07-01

    The goal of this paper is to develop a pragmatic production throughput forecasting system for an automated test operation in a hard disk drive manufacturing plant. Accurate forecasts are necessary for the management team to respond to changes in the production processes and the resource allocations. The proposed system consists of three main stages. In the first stage, a mutual information method is adopted for selecting the relevant inputs to the forecasting model. In the second stage, a generalized regression neural network (GRNN) is implemented in the forecasting model development phase. Finally, forecasting accuracy is improved by searching for the optimal smoothing parameter, selected by comparing three optimization algorithms: particle swarm optimization (PSO), unrestricted search optimization (USO) and interval halving optimization (IHO). The experimental results show that (1) the developed GRNN-based production throughput forecasting system provides forecasts close to actual values and projects the future trends of production throughput in an automated hard disk drive test operation; (2) the IHO algorithm is the most appropriate of the three optimization methods; and (3) compared with the current forecasting system in manufacturing, the proposed system is superior in prediction accuracy and suitable for real-world application. The production throughput volume is a key performance index of hard disk drive manufacturing systems that needs to be forecast, because the forecasting result is useful information for the management team when responding to changes in production processes and resource allocation. However, a practical forecasting system for
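For reference, a GRNN prediction reduces to Nadaraya-Watson kernel regression with a Gaussian kernel, and the smoothing parameter σ can be tuned by an interval-halving style search (a ternary search here, assuming a unimodal validation error). The sketch below uses illustrative names and is not the paper's system:

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma):
    """GRNN output: kernel-weighted average of training targets."""
    d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to x
    w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian pattern weights
    return np.dot(w, y_train) / np.sum(w)

def tune_sigma_interval_halving(X, y, lo, hi, tol=1e-3):
    """Shrink [lo, hi] around the sigma minimizing leave-one-out error."""
    def loo_error(s):
        idx = np.arange(len(y))
        return sum((grnn_predict(X[idx != i], y[idx != i], X[i], s) - y[i]) ** 2
                   for i in range(len(y)))
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if loo_error(m1) < loo_error(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)
```

Because the GRNN output is a convex combination of training targets, predictions always stay within the range of observed throughput values, which makes the model robust for this kind of forecasting task.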

  14. BER and total throughput of asynchronous DS-OCDMA/WDM systems with multiple user interference

    OpenAIRE

    Ghiringhelli, F.; Zervas, M.N.

    2003-01-01

    The BER and throughput of Direct-Sequence OCDMA/WDM systems based on quadripolar codes and superstructured fiber Bragg gratings are statistically derived under asynchronous operation, intensity detection, and Multiple User Interference. Performance improvements with Forward Error Correction are included.

  15. Maximal dissipation and well-posedness for the compressible Euler system

    Czech Academy of Sciences Publication Activity Database

    Feireisl, Eduard

    2014-01-01

    Roč. 16, č. 3 (2014), s. 447-461 ISSN 1422-6928 EU Projects: European Commission(XE) 320078 - MATHEF Keywords : maximal dissipation * compressible Euler system * weak solution Subject RIV: BA - General Mathematics Impact factor: 1.186, year: 2014 http://link.springer.com/article/10.1007/s00021-014-0163-8

  16. PARAMETRIZATION OF INNER STRUCTURE OF AGRICULTURAL SYSTEMS ON THE BASIS OF MAXIMAL YIELDS ISOLINES (ISOCARPS)

    Directory of Open Access Journals (Sweden)

    K KUDRNA

    2004-07-01

    Full Text Available On the basis of an analysis of yield time series from a ten-year period, isolines of maximal yields of crops (isocarps) have been constructed, homogenized yield zones have been determined, and inner structures of the agricultural system have been calculated. The algorithms for the calculation of a normal and an optimal structure have been used, and differences in the structure of the agricultural system have been determined for every defined zone.

  17. Linear maps preserving maximal deviation and the Jordan structure of quantum systems

    International Nuclear Information System (INIS)

    Hamhalter, Jan

    2012-01-01

    In the algebraic approach to quantum theory, a quantum observable is given by an element of a Jordan algebra and a state of the system is modelled by a normalized positive functional on the underlying algebra. Maximal deviation of a quantum observable is the largest statistical deviation one can obtain in a particular state of the system. The main result of the paper shows that each linear bijective transformation between JBW algebras preserving maximal deviations is formed by a Jordan isomorphism or a minus Jordan isomorphism perturbed by a linear functional multiple of an identity. It shows that only one numerical statistical characteristic has the power to determine the Jordan algebraic structure completely. As a consequence, we obtain that only very special maps can preserve the diameter of the spectra of elements. Nonlinear maps preserving the pseudometric given by maximal deviation are also described. The results generalize hitherto known theorems on preservers of maximal deviation in the case of self-adjoint parts of von Neumann algebras proved by Molnár.

  18. Gateway-compatible vectors for high-throughput protein expression in pro- and eukaryotic cell-free systems.

    Science.gov (United States)

    Gagoski, Dejan; Mureev, Sergey; Giles, Nichole; Johnston, Wayne; Dahmer-Heath, Mareike; Škalamera, Dubravka; Gonda, Thomas J; Alexandrov, Kirill

    2015-02-10

    Although numerous techniques for protein expression and production are available, the pace of genome sequencing outstrips our ability to analyze the encoded proteins. To address this bottleneck, we have established a system for parallelized cloning, DNA production and cell-free expression of large numbers of proteins. This system is based on a suite of pCellFree Gateway destination vectors that utilize a Species Independent Translation Initiation Sequence (SITS) that mediates recombinant protein expression in any in vitro translation system. These vectors introduce C- or N-terminal EGFP and mCherry fluorescent and affinity tags, enabling direct analysis and purification of the expressed proteins. To maximize throughput and minimize the cost of protein production we combined Gateway cloning with Rolling Circle DNA Amplification. We demonstrate that as little as 0.1 ng of plasmid DNA is sufficient for template amplification and production of recombinant human protein in Leishmania tarentolae and Escherichia coli cell-free expression systems. Our experiments indicate that this approach can be applied to large gene libraries, as it can be reliably performed in multi-well plates. The resulting protein expression pipeline provides a valuable new tool for applications of the post-genomic era. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. A High-Throughput, High-Accuracy System-Level Simulation Framework for System on Chips

    Directory of Open Access Journals (Sweden)

    Guanyi Sun

    2011-01-01

    Full Text Available Today's System-on-Chip (SoC) design is extremely challenging because it involves complicated design tradeoffs and heterogeneous design expertise. To explore the large solution space, system architects have to rely on system-level simulators to identify an optimized SoC architecture. In this paper, we propose a system-level simulation framework, the System Performance Simulation Implementation Mechanism, or SPSIM. Based on SystemC TLM2.0, the framework consists of an executable SoC model, a simulation tool chain, and a modeling methodology. Compared with the large body of existing research in this area, this work aims at delivering a high simulation throughput while, at the same time, guaranteeing high accuracy on real industrial applications. Integrating the leading TLM techniques, our simulator attains a simulation speed within a factor of 35 of hardware execution on a set of real-world applications. SPSIM incorporates effective timing models, which can achieve high accuracy after hardware-based calibration. Experimental results on a set of mobile applications show that the difference between the simulated and measured timing performance is within 10%, an accuracy that in the past could only be attained by cycle-accurate models.

  20. Systemic inflammatory responses to maximal versus submaximal lengthening contractions of the elbow flexors.

    Science.gov (United States)

    Peake, Jonathan M; Nosaka, Kazunori; Muthalib, Makii; Suzuki, Katsuhiko

    2006-01-01

    We compared changes in markers of muscle damage and systemic inflammation after submaximal and maximal lengthening muscle contractions of the elbow flexors. Using a cross-over design, 10 healthy young men not involved in resistance training completed a submaximal trial (10 sets of 60 lengthening contractions at 10% maximum isometric strength, 1 min rest between sets), followed by a maximal trial (10 sets of three lengthening contractions at 100% maximum isometric strength, 3 min rest between sets). Lengthening contractions were performed on an isokinetic dynamometer. Opposite arms were used for the submaximal and maximal trials, and the trials were separated by a minimum of two weeks. Blood was sampled before, immediately after, 1 h, 3 h, and 1-4 d after each trial. Total leukocyte and neutrophil numbers, and the serum concentration of soluble tumor necrosis factor-alpha receptor 1 were elevated after both trials (P < 0.01), but there were no differences between the trials. Serum IL-6 concentration was elevated 3 h after the submaximal contractions (P < 0.01). The concentrations of serum tumor necrosis factor-alpha, IL-1 receptor antagonist, IL-10, granulocyte-colony stimulating factor and plasma C-reactive protein remained unchanged following both trials. Maximum isometric strength and range of motion decreased significantly (P < 0.001) after both trials, and were lower from 1-4 days after the maximal contractions compared to the submaximal contractions. Plasma myoglobin concentration and creatine kinase activity, muscle soreness and upper arm circumference all increased after both trials (P < 0.01), but were not significantly different between the trials. Therefore, there were no differences in markers of systemic inflammation, despite evidence of greater muscle damage following maximal versus submaximal lengthening contractions of the elbow flexors.

  1. High Throughput Line-of-Sight MIMO Systems for Next Generation Backhaul Applications

    Science.gov (United States)

    Song, Xiaohang; Cvetkovski, Darko; Hälsig, Tim; Rave, Wolfgang; Fettweis, Gerhard; Grass, Eckhard; Lankl, Berthold

    2017-09-01

    The evolution to ultra-dense next generation networks requires a massive increase in throughput and deployment flexibility. Therefore, novel wireless backhaul solutions that can support these demands are needed. In this work we present an approach for a millimeter wave line-of-sight MIMO backhaul design, targeting transmission rates in the order of 100 Gbit/s. We provide theoretical foundations for the concept showcasing its potential, which are confirmed through channel measurements. Furthermore, we provide insights into the system design with respect to antenna array setup, baseband processing, synchronization, and channel equalization. Implementation in a 60 GHz demonstrator setup proves the feasibility of the system concept for high throughput backhauling in next generation networks.

  2. Development of Microfluidic Systems Enabling High-Throughput Single-Cell Protein Characterization

    OpenAIRE

    Fan, Beiyuan; Li, Xiufeng; Chen, Deyong; Peng, Hongshang; Wang, Junbo; Chen, Jian

    2016-01-01

    This article reviews recent developments in microfluidic systems enabling high-throughput characterization of single-cell proteins. Four key perspectives of microfluidic platforms are included in this review: (1) microfluidic fluorescent flow cytometry; (2) droplet-based microfluidic flow cytometry; (3) large-array micro wells (microengraving); and (4) large-array micro chambers (barcode microchips). We examine the advantages and limitations of each technique and discuss future research opportunities ...

  3. Maximal violation of Clauser-Horne-Shimony-Holt inequality for four-level systems

    International Nuclear Information System (INIS)

    Fu Libin; Chen Jingling; Chen Shigang

    2004-01-01

    The Clauser-Horne-Shimony-Holt inequality for bipartite systems of four dimensions is studied in detail by employing unbiased eight-port beam-splitter measurements. Uniform formulas for the maximum and minimum values of this inequality under such measurements are obtained. Based on these formulas, we show that an optimal nonmaximally entangled state is about 6% more resistant to noise than the maximally entangled one. We also give the optimal state and the optimal angles, which are important for experimental realization.

  4. Uplink SDMA with Limited Feedback: Throughput Scaling

    Directory of Open Access Journals (Sweden)

    Jeffrey G. Andrews

    2008-01-01

    Full Text Available Combined space division multiple access (SDMA) and scheduling exploit both spatial multiplexing and multiuser diversity, increasing throughput significantly. Both SDMA and scheduling require feedback of multiuser channel state information (CSI). This paper focuses on uplink SDMA with limited feedback, which refers to efficient techniques for CSI quantization and feedback. To quantify the throughput of uplink SDMA and derive design guidelines, the throughput scaling with system parameters is analyzed. The specific parameters considered include the numbers of users, antennas, and feedback bits. Furthermore, different SNR regimes and beamforming methods are considered. The derived throughput scaling laws are observed to change across SNR regimes. For instance, the throughput scales logarithmically with the number of users in the high-SNR regime but double-logarithmically in the low-SNR regime. The analysis of throughput scaling suggests guidelines for scheduling in uplink SDMA. For example, to maximize throughput scaling, scheduling should use the criterion of minimum quantization error in the high-SNR regime and maximum channel power in the low-SNR regime.
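
    The "maximum channel power" scheduling criterion above rests on multiuser diversity: the strongest of K independently faded channels grows only slowly (roughly logarithmically) with K. A small Monte Carlo sketch of that effect, assuming unit-mean Rayleigh fading (exponentially distributed power gains); the function and parameter names are illustrative, not from the paper:

```python
import math
import random

def max_channel_power(num_users, trials=2000, seed=1):
    """Average of the strongest of num_users unit-mean Rayleigh
    (i.e. exponentially distributed) channel power gains."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += max(rng.expovariate(1.0) for _ in range(num_users))
    return total / trials

# Extreme-value theory for exponentials predicts growth like
# ln(K) + Euler-Mascheroni constant (~0.5772).
for k in (4, 16, 64):
    est = max_channel_power(k)
    pred = math.log(k) + 0.5772
    # est should track pred for each k
```

Quadrupling the user count therefore adds only a constant amount of expected channel power, which is the logarithmic-in-users flavor of the scaling laws discussed in the abstract.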

  5. Solar photovoltaic system design optimization by shading analysis to maximize energy generation from limited urban area

    International Nuclear Information System (INIS)

    Rachchh, Ravi; Kumar, Manoj; Tripathi, Brijesh

    2016-01-01

    Highlights: • Scheme to maximize the total number of solar panels in a given area. • Enhanced energy output from a fixed area without compromising efficiency. • Capacity and generated energy are enhanced by more than 25%. - Abstract: In urban areas the demand for solar power is increasing, owing to better awareness of greenhouse-gas emissions from conventional thermal power plants and a significant decrease in the installation cost of residential solar power plants. However, land cost and under-utilization of available space hinder further growth. Under these circumstances, a solar photovoltaic installation needs to accommodate the maximum number of solar panels, whether roof-top or land-mounted. In this article a new approach is suggested to maximize the total number of solar panels in a given area, with enhanced energy output and without compromising the overall efficiency of the system. The number of solar panels can be maximized by optimizing installation parameters such as tilt angle, pitch, gain factor, altitude angle and shading to improve the energy yield. Mathematical analysis shows that the capacity and generated energy can be enhanced by more than 25% for a given land area by optimizing these parameters.
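
    Among the installation parameters listed above, pitch and shading interact through simple geometry: each row must stand far enough behind the row in front to clear its shadow at the worst-case sun altitude. A minimal 2-D sketch of that calculation (standard row-spacing geometry; the function and parameter names are illustrative, not taken from the paper):

```python
import math

def min_row_pitch(panel_len_m, tilt_deg, sun_altitude_deg):
    """Smallest row-to-row pitch (m) at which a tilted panel row does not
    shade the row behind it, for a given worst-case sun altitude angle.
    Pure 2-D geometry: horizontal footprint plus the shadow of the raised edge."""
    tilt = math.radians(tilt_deg)
    alt = math.radians(sun_altitude_deg)
    footprint = panel_len_m * math.cos(tilt)   # horizontal extent of the row
    rise = panel_len_m * math.sin(tilt)        # height of the upper edge
    shadow = rise / math.tan(alt)              # shadow cast beyond the footprint
    return footprint + shadow

# e.g. 2 m panels at 30 degree tilt, 20 degree worst-case winter sun altitude
pitch = min_row_pitch(2.0, 30.0, 20.0)
```

Lowering the design sun altitude (deeper winter sun) lengthens the shadow term and forces a larger pitch, which is exactly the capacity-versus-shading trade-off the abstract optimizes.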

  6. A Fully Automated High-Throughput Flow Cytometry Screening System Enabling Phenotypic Drug Discovery.

    Science.gov (United States)

    Joslin, John; Gilligan, James; Anderson, Paul; Garcia, Catherine; Sharif, Orzala; Hampton, Janice; Cohen, Steven; King, Miranda; Zhou, Bin; Jiang, Shumei; Trussell, Christopher; Dunn, Robert; Fathman, John W; Snead, Jennifer L; Boitano, Anthony E; Nguyen, Tommy; Conner, Michael; Cooke, Mike; Harris, Jennifer; Ainscow, Ed; Zhou, Yingyao; Shaw, Chris; Sipes, Dan; Mainquist, James; Lesley, Scott

    2018-05-01

    The goal of high-throughput screening is to enable screening of compound libraries in an automated manner to identify quality starting points for optimization. This often involves screening a large diversity of compounds in an assay that preserves a connection to the disease pathology. Phenotypic screening is a powerful tool for drug identification, in that assays can be run without prior understanding of the target and with primary cells that closely mimic the therapeutic setting. Advanced automation and high-content imaging have enabled many complex assays, but these are still relatively slow and low throughput. To address this limitation, we have developed an automated workflow that is dedicated to processing complex phenotypic assays for flow cytometry. The system can achieve a throughput of 50,000 wells per day, resulting in a fully automated platform that enables robust phenotypic drug discovery. Over the past 5 years, this screening system has been used for a variety of drug discovery programs, across many disease areas, with many molecules advancing quickly into preclinical development and into the clinic. This report will highlight a diversity of approaches that automated flow cytometry has enabled for phenotypic drug discovery.

  7. New high-throughput material-exploration system based on combinatorial chemistry and electrostatic atomization

    International Nuclear Information System (INIS)

    Fujimoto, K.; Takahashi, H.; Ito, S.; Inoue, S.; Watanabe, M.

    2006-01-01

    As a tool to facilitate future material exploration, our group has developed a new combinatorial system for the high-throughput preparation of compounds made up of more than three components. The system works in two steps: atomization of a liquid by a high electric field, followed by deposition onto a grounded substrate. The combinatorial system based on this method has several syringe pumps. The starting materials are fed through the syringe pumps into a manifold, thoroughly mixed as they pass through the manifold, and atomized from the tip of a stainless steel nozzle onto a grounded substrate.

  8. Optimization of cascade hydropower system operation by genetic algorithm to maximize clean energy output

    Directory of Open Access Journals (Sweden)

    Aida Tayebiyan

    2016-06-01

    Full Text Available Background: Several reservoir systems have been constructed for hydropower generation around the world. Hydropower offers an economical source of electricity with reduced carbon emissions, making it a clean and renewable source of energy. Reservoirs that generate hydropower are typically operated with the goal of maximizing energy revenue, yet they are often operated inefficiently, according to policies fixed at construction time. Even a small enhancement in the operation of a reservoir system can increase the efficiency of the scheme for many consumers. Methods: This research develops simulation-optimization models that reflect a discrete hedging policy (DHP) to manage and operate a hydropower reservoir system, and analyses it in both single- and multi-reservoir settings. Accordingly, three operational models (two single-reservoir systems and one multi-reservoir system) were constructed and optimized by a genetic algorithm (GA). Maximizing the total power generation over the time horizon is chosen as the objective function, in order to improve the efficiency of hydropower production subject to operational and physical limitations. The constructed models, which form a cascade hydropower reservoir system, were tested and evaluated in the Cameron Highlands and Batang Padang in Malaysia. Results: According to the results, using DHP for hydropower reservoir system operation could increase the power generation output by nearly 13% in the studied reservoir system compared to the present operating policy (TNB operation). This substantial increase in power production would enhance economic development. Moreover, the results for the single- and multi-reservoir systems affirmed that the hedging policy managed the single system much better than the multi-reservoir system. Conclusion: It can be summarized that DHP is an efficient and feasible policy.
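
    As a toy illustration of the optimization machinery (not the authors' reservoir model), the sketch below uses a minimal real-coded genetic algorithm to maximize a hypothetical generation objective over a 12-period release schedule, with a penalty term standing in for a water-budget constraint. All objective shapes and constants are invented for illustration:

```python
import random

def ga_maximize(fitness, n_genes, pop_size=40, gens=60, seed=7):
    """Minimal real-coded GA (tournament selection, blend crossover,
    Gaussian mutation) maximizing `fitness` over [0, 1]^n_genes."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(gens):
        new = []
        while len(new) < pop_size:
            p1, p2 = tournament(), tournament()
            # blend crossover (midpoint) plus small Gaussian mutation, clipped
            child = [min(1.0, max(0.0, 0.5 * (g1 + g2) + rng.gauss(0, 0.05)))
                     for g1, g2 in zip(p1, p2)]
            new.append(child)
        pop = new
    return max(pop, key=fitness)

# Hypothetical objective: generation is concave in each period's release,
# and total release over the horizon is capped (a crude storage limit).
def power(releases):
    total = sum(releases)
    penalty = max(0.0, total - 6.0) * 10.0   # hypothetical water budget
    return sum(r - 0.5 * r * r for r in releases) - penalty

best = ga_maximize(power, n_genes=12)
```

With this toy objective the constrained optimum spreads the budget evenly across periods; the GA should land close to that schedule without any gradient information, which is why GAs suit the non-smooth hedging-policy objectives described above.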

  9. Output power maximization of low-power wind energy conversion systems revisited: Possible control solutions

    Energy Technology Data Exchange (ETDEWEB)

    Vlad, Ciprian; Munteanu, Iulian; Bratcu, Antoneta Iuliana; Ceanga, Emil [' ' Dunarea de Jos' ' University of Galati, 47, Domneasca, 800008-Galati (Romania)

    2010-02-15

    This paper discusses the problem of output power maximization for low-power wind energy conversion systems operated at partial load. These systems are generally based on multi-polar permanent-magnet synchronous generators, which exhibit significant efficiency variations over the operating range. Unlike high-power systems, whose mechanical-to-electrical conversion efficiency is high and practically does not shift the global optimum, the global conversion efficiency of low-power systems is affected by the generator behavior, so electrical power optimization is no longer equivalent to mechanical power optimization. The system efficiency has been analyzed using both the locus of maxima of the mechanical power versus rotational speed characteristics and the locus of maxima of the delivered electrical power versus rotational speed characteristics. The experimental investigation was carried out using a torque-controlled generator taken from a real-world wind turbine, coupled to a physically simulated wind turbine rotor. The experimental results indeed show that the steady-state performance of the conversion system is strongly determined by the generator behavior. Several control solutions aiming at maximizing the energy efficiency are envisaged and thoroughly compared through experimental results. (author)

  10. Output power maximization of low-power wind energy conversion systems revisited: Possible control solutions

    International Nuclear Information System (INIS)

    Vlad, Ciprian; Munteanu, Iulian; Bratcu, Antoneta Iuliana; Ceanga, Emil

    2010-01-01

    This paper discusses the problem of output power maximization for low-power wind energy conversion systems operated at partial load. These systems are generally based on multi-polar permanent-magnet synchronous generators, which exhibit significant efficiency variations over the operating range. Unlike high-power systems, whose mechanical-to-electrical conversion efficiency is high and practically does not shift the global optimum, the global conversion efficiency of low-power systems is affected by the generator behavior, so electrical power optimization is no longer equivalent to mechanical power optimization. The system efficiency has been analyzed using both the locus of maxima of the mechanical power versus rotational speed characteristics and the locus of maxima of the delivered electrical power versus rotational speed characteristics. The experimental investigation was carried out using a torque-controlled generator taken from a real-world wind turbine, coupled to a physically simulated wind turbine rotor. The experimental results indeed show that the steady-state performance of the conversion system is strongly determined by the generator behavior. Several control solutions aiming at maximizing the energy efficiency are envisaged and thoroughly compared through experimental results.

  11. Robofurnace: A semi-automated laboratory chemical vapor deposition system for high-throughput nanomaterial synthesis and process discovery

    International Nuclear Information System (INIS)

    Oliver, C. Ryan; Westrick, William; Koehler, Jeremy; Brieland-Shoultz, Anna; Anagnostopoulos-Politis, Ilias; Cruz-Gonzalez, Tizoc; Hart, A. John

    2013-01-01

    Laboratory research and development on new materials, such as nanostructured thin films, often utilizes manual equipment such as tube furnaces due to its relatively low cost and ease of setup. However, these systems can be prone to inconsistent outcomes due to variations in standard operating procedures, and performance limitations such as heating and cooling rates restrict the parameter space that can be explored. Perhaps more importantly, maximization of research throughput, and the successful and efficient translation of materials processing knowledge to production-scale systems, rely on the attainment of consistent outcomes. In response to this need, we present a semi-automated lab-scale chemical vapor deposition (CVD) furnace system, called “Robofurnace.” Robofurnace is an automated CVD system built around a standard tube furnace, which automates sample insertion and removal and uses motion of the furnace to achieve rapid heating and cooling. The system has a 10-sample magazine and motorized transfer arm, which isolates the samples from the lab atmosphere and enables highly repeatable placement of the sample within the tube. The system is designed to enable continuous operation of the CVD reactor, with asynchronous loading/unloading of samples. To demonstrate its performance, Robofurnace is used to develop a rapid CVD recipe for carbon nanotube (CNT) forest growth, achieving a 10-fold improvement in CNT forest mass density compared to a benchmark recipe using a manual tube furnace. In the long run, multiple systems like Robofurnace may be linked to share data among laboratories by methods such as Twitter. Our hope is that Robofurnace and similar automation will enable machine learning to optimize and discover relationships in complex material synthesis processes.

  12. Robofurnace: A semi-automated laboratory chemical vapor deposition system for high-throughput nanomaterial synthesis and process discovery

    Energy Technology Data Exchange (ETDEWEB)

    Oliver, C. Ryan; Westrick, William; Koehler, Jeremy; Brieland-Shoultz, Anna; Anagnostopoulos-Politis, Ilias; Cruz-Gonzalez, Tizoc [Department of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan 48109 (United States); Hart, A. John, E-mail: ajhart@mit.edu [Department of Mechanical Engineering, University of Michigan, Ann Arbor, Michigan 48109 (United States); Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139 (United States)

    2013-11-15

    Laboratory research and development on new materials, such as nanostructured thin films, often utilizes manual equipment such as tube furnaces due to its relatively low cost and ease of setup. However, these systems can be prone to inconsistent outcomes due to variations in standard operating procedures, and performance limitations such as heating and cooling rates restrict the parameter space that can be explored. Perhaps more importantly, maximization of research throughput, and the successful and efficient translation of materials processing knowledge to production-scale systems, rely on the attainment of consistent outcomes. In response to this need, we present a semi-automated lab-scale chemical vapor deposition (CVD) furnace system, called “Robofurnace.” Robofurnace is an automated CVD system built around a standard tube furnace, which automates sample insertion and removal and uses motion of the furnace to achieve rapid heating and cooling. The system has a 10-sample magazine and motorized transfer arm, which isolates the samples from the lab atmosphere and enables highly repeatable placement of the sample within the tube. The system is designed to enable continuous operation of the CVD reactor, with asynchronous loading/unloading of samples. To demonstrate its performance, Robofurnace is used to develop a rapid CVD recipe for carbon nanotube (CNT) forest growth, achieving a 10-fold improvement in CNT forest mass density compared to a benchmark recipe using a manual tube furnace. In the long run, multiple systems like Robofurnace may be linked to share data among laboratories by methods such as Twitter. Our hope is that Robofurnace and similar automation will enable machine learning to optimize and discover relationships in complex material synthesis processes.

  13. Definition, modeling and simulation of a grid computing system for high throughput computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper, we study and compare grid and global computing systems and outline the benefits of a hybrid system called DIRAC. To evaluate DIRAC scheduling for high-throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network, and each cluster is managed via a local scheduler which is shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we compare it with a real batch system and obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and describes well the behaviour of a large-scale system. Thus we can study the scheduling of our DIRAC system in a high-throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralized...
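
    The M/M/4 validation step mentioned above can be reproduced in miniature: simulate a first-come-first-served queue with Poisson arrivals and four exponential servers, and compare the mean response time against the analytical Erlang C result. A self-contained sketch (arrival and service rates are illustrative, not taken from the paper):

```python
import math
import random

def mmc_mean_response(lam, mu, c):
    """Analytical M/M/c mean response time via the Erlang C formula."""
    a = lam / mu                                  # offered load
    rho = a / c                                   # server utilization
    top = a**c / math.factorial(c) / (1.0 - rho)
    p_wait = top / (sum(a**k / math.factorial(k) for k in range(c)) + top)
    return p_wait / (c * mu - lam) + 1.0 / mu

def simulate_mmc(lam, mu, c, n_jobs=20000, seed=3):
    """FCFS M/M/c simulation: each arriving job takes the earliest-free server."""
    rng = random.Random(seed)
    free_at = [0.0] * c
    t = total = 0.0
    for _ in range(n_jobs):
        t += rng.expovariate(lam)                 # Poisson arrival process
        i = min(range(c), key=free_at.__getitem__)
        finish = max(t, free_at[i]) + rng.expovariate(mu)
        free_at[i] = finish
        total += finish - t                       # response = wait + service
    return total / n_jobs

# M/M/4 at 75% utilization: the simulation should sit close to theory.
ana = mmc_mean_response(3.0, 1.0, 4)
sim = simulate_mmc(3.0, 1.0, 4)
```

The same agree-with-theory check is the natural first sanity test for any scheduler simulator before confronting it with real batch-system traces, as the abstract describes.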

  14. Cell Culture Microfluidic Biochips: Experimental Throughput Maximization

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan; Pop, Paul; Madsen, Jan

    2011-01-01

    Microfluidic biochips offer a promising alternative to a conventional biochemical laboratory, integrating all necessary functionalities on-chip in order to perform biochemical applications. Researchers have started to propose computer-aided design tools for the synthesis of such biochips. Our focus...... metaheuristic for experimental design generation for the cell culture microfluidic biochips, and we have evaluated our approach using multiple experimental setups....

  15. Entropy Maximization as a Basis for Information Recovery in Dynamic Economic Behavioral Systems

    Directory of Open Access Journals (Sweden)

    George Judge

    2015-02-01

    Full Text Available As a basis for information recovery in open dynamic microeconomic systems, we emphasize the connection between adaptive intelligent behavior, causal entropy maximization and self-organized equilibrium seeking behavior. This entropy-based causal adaptive behavior framework permits the use of information-theoretic methods as a solution basis for the resulting pure and stochastic inverse economic-econometric problems. We cast the information recovery problem in the form of a binary network and suggest information-theoretic methods to recover estimates of the unknown binary behavioral parameters without explicitly sampling the configuration-arrangement of the sample space.
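
    As a minimal, generic illustration of the entropy-maximization principle invoked above (not the authors' econometric formulation): among all distributions on a finite support with a prescribed mean, the entropy maximizer belongs to an exponential family, and its Lagrange multiplier can be found by bisection on the mean.

```python
import math

def max_entropy_dist(values, mean_target, tol=1e-10):
    """Maximum-entropy distribution over `values` with a fixed mean:
    p_i is proportional to exp(b * v_i); solve for b by bisection,
    using the fact that the implied mean is increasing in b."""
    def mean_for(b):
        w = [math.exp(b * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < mean_target:
            lo = mid
        else:
            hi = mid
    b = 0.5 * (lo + hi)
    w = [math.exp(b * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w]

# Over {0, 1, 2, 3} with mean 1.5, the maximizer is the uniform distribution (b = 0).
p = max_entropy_dist([0, 1, 2, 3], 1.5)
```

Moment constraints play the role of the observed data; the recovered exponential-family weights are the information-theoretic analogue of the behavioral parameters the abstract seeks to estimate.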

  16. Using Computer Simulation Method to Improve Throughput of Production Systems by Buffers and Workers Allocation

    Directory of Open Access Journals (Sweden)

    Kłos Sławomir

    2015-12-01

    Full Text Available This paper proposes the application of computer simulation methods to support decision making regarding intermediate buffer allocation in a series-parallel production line. The simulation model of the production system is based on a real example of a manufacturing company working in the automotive industry. Simulation experiments were conducted for different allocations of buffer capacities and different numbers of employees. The production system consists of three technological operations with intermediate buffers between each operation. The technological operations are carried out using machines, and every machine can be operated by one worker. Multi-machine work is possible (one operator can operate several machines). On the basis of the simulation experiments, the relationship between system throughput, buffer allocation and the number of employees is analyzed. Increasing the buffer capacity results in an increase in the average product lifespan. Therefore, the article proposes a new index that combines the throughput of the manufacturing system and the product life span. Simulation experiments were performed for different configurations of technological operations.
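
    The buffer-capacity experiments described above can be mimicked with a compact simulation of a tandem line with finite buffers and blocking-after-service, using the standard max-plus recursion over job departure times. This is an independent sketch, not the authors' model; the machine count, processing-time distribution and buffer sizes below are illustrative:

```python
import random

def line_throughput(proc_means, buffers, n_jobs=5000, seed=5):
    """Tandem production line with blocking-after-service.
    proc_means[i]: mean exponential processing time of machine i.
    buffers[i]: capacity of the buffer between machine i and i+1.
    Returns completed jobs per time unit over n_jobs jobs."""
    rng = random.Random(seed)
    m = len(proc_means)
    # d[i][j]: departure time of job j from machine i (1-based, zero-padded)
    d = [[0.0] * (n_jobs + 1) for _ in range(m + 2)]
    for j in range(1, n_jobs + 1):
        for i in range(1, m + 1):
            t = rng.expovariate(1.0 / proc_means[i - 1])
            # start once the part arrives and the machine is free
            done = max(d[i - 1][j], d[i][j - 1]) + t
            if i < m:
                # blocked until job j - B - 1 has left the next machine,
                # freeing a slot in the intermediate buffer
                k = j - buffers[i - 1] - 1
                block_until = d[i + 1][k] if k >= 1 else 0.0
                d[i][j] = max(done, block_until)
            else:
                d[i][j] = done
    return n_jobs / d[m][n_jobs]

# Bigger intermediate buffers decouple the machines and raise throughput:
low = line_throughput([1.0, 1.0, 1.0], [1, 1])
high = line_throughput([1.0, 1.0, 1.0], [10, 10])
```

Throughput stays below the bottleneck rate (here 1 job per time unit) but climbs toward it as buffers grow, at the cost of jobs spending longer in the system, which is exactly the throughput-versus-lifespan trade-off the proposed index captures.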

  17. OptoDyCE: Automated system for high-throughput all-optical dynamic cardiac electrophysiology

    Science.gov (United States)

    Klimas, Aleksandra; Yu, Jinzhu; Ambrosi, Christina M.; Williams, John C.; Bien, Harold; Entcheva, Emilia

    2016-02-01

    In the last two decades, numerous drug withdrawals from the market were due to cardiac toxicity, where unintended interactions with ion channels disrupt the heart's normal electrical function. Consequently, all new drugs must undergo preclinical testing for cardiac liability, adding to an already expensive and lengthy process. Recognition that proarrhythmic effects often result from drug action on multiple ion channels demonstrates a need for integrative and comprehensive measurements. Additionally, patient-specific therapies relying on emerging technologies employing stem-cell-derived cardiomyocytes (e.g. induced pluripotent stem-cell-derived cardiomyocytes, iPSC-CMs) require better screening methods to become practical. However, a high-throughput, cost-effective approach for cellular cardiac electrophysiology has not been feasible. Optical techniques for manipulation and recording provide a contactless means of dynamic, high-throughput testing of cells and tissues. Here, we consider the requirements for all-optical electrophysiology for drug testing, and we implement and validate OptoDyCE, a fully automated system for all-optical cardiac electrophysiology. We demonstrate the high-throughput capabilities using multicellular samples in 96-well format by combining optogenetic actuation with simultaneous fast high-resolution optical sensing of voltage or intracellular calcium. The system can also be implemented using iPSC-CMs and other cell types by delivery of optogenetic drivers, or through the modular use of dedicated light-sensitive somatic cells in conjunction with non-modified cells. OptoDyCE provides a truly modular and dynamic screening system, capable of fully automated acquisition of high-content information integral for improved discovery and development of new drugs and biologics, as well as providing a means of better understanding electrical disturbances in the heart.

  18. Power Maximization Control of Variable Speed Wind Generation System Using Permanent Magnet Synchronous Generator

    Science.gov (United States)

    Morimoto, Shigeo; Nakamura, Tomohiko; Takeda, Yoji

    This paper proposes the sensorless output power maximization control of the wind generation system. A permanent magnet synchronous generator (PMSG) is used as a variable speed generator in the proposed system. The generator torque is suitably controlled according to the generator speed and thus the power from a wind turbine settles down on the maximum power point by the proposed MPPT control method, where the information of wind velocity is not required. Moreover, the maximum available generated power is obtained by the optimum current vector control. The current vector of PMSG is optimally controlled according to the generator speed and the required torque in order to minimize the losses of PMSG considering the voltage and current constraints. The proposed wind power generation system can be achieved without mechanical sensors such as a wind velocity detector and a position sensor. Several experimental results show the effectiveness of the proposed control method.
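
    The paper's MPPT scheme drives the generator torque from measured speed alone; a generic stand-in that likewise needs no wind measurement is perturb-and-observe hill climbing, sketched below on a toy power-versus-speed curve. All curve shapes and constants here are hypothetical, not the authors' control law:

```python
def wind_power(speed, wind=8.0):
    """Toy turbine power curve: peaks at an optimal tip-speed ratio."""
    lam = speed / wind                            # crude tip-speed-ratio stand-in
    cp = max(0.0, 0.2 * lam * (2.5 - lam))        # hypothetical Cp curve, peak at lam = 1.25
    return cp * wind**3                           # constants (rho, area) folded in

def mppt_hill_climb(speed=4.0, step=0.25, iters=100):
    """Perturb-and-observe: nudge the rotor speed, keep the direction
    while power rises, reverse when it falls."""
    direction = 1.0
    last = wind_power(speed)
    for _ in range(iters):
        speed += direction * step
        p = wind_power(speed)
        if p < last:                              # overshot the peak: reverse
            direction = -direction
        last = p
    return speed

s = mppt_hill_climb()   # settles near the peak-power speed (10.0 for wind = 8.0)
```

Like the paper's torque-control MPPT, this climbs to the maximum-power operating point without a wind-velocity sensor; the perturbation step size trades convergence speed against steady-state oscillation around the peak.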

  19. Engineering a vitamin B12 high-throughput screening system by riboswitch sensor in Sinorhizobium meliloti.

    Science.gov (United States)

    Cai, Yingying; Xia, Miaomiao; Dong, Huina; Qian, Yuan; Zhang, Tongcun; Zhu, Beiwei; Wu, Jinchuan; Zhang, Dawei

    2018-05-11

    As a very important coenzyme in cell metabolism, vitamin B12 (cobalamin, VB12) has been widely used in the food and medicine fields. The complete biosynthesis of VB12 requires approximately 30 genes, but overexpression of these genes did not result in the expected increase of VB12 production. High-yield VB12-producing strains are usually obtained by mutagenesis treatments, so developing an efficient screening approach is urgently needed. With the help of engineered strains with varied capacities for VB12 production, a riboswitch library was constructed and screened, and the btuB element from Salmonella typhimurium was identified as the best regulatory device. A flow cytometry high-throughput screening system was developed based on the btuB riboswitch to identify positive mutants with high efficiency. Mutation of Sinorhizobium meliloti (S. meliloti) was optimized using the novel mutation technique of atmospheric and room temperature plasma (ARTP). Finally, the mutant S. meliloti MC5-2 was obtained and considered a candidate for industrial applications. After 7 days of cultivation on a rotary shaker at 30 °C, the VB12 titer of S. meliloti MC5-2 reached 156 ± 4.2 mg/L, which was 21.9% higher than that of the wild-type strain S. meliloti 320 (128 ± 3.2 mg/L). The genome of S. meliloti MC5-2 was sequenced, and gene mutations were identified and analyzed. To our knowledge, this is the first time a riboswitch element has been used in S. meliloti. The flow cytometry high-throughput screening system was successfully developed and a high-yield VB12-producing strain was obtained. The identified and analyzed gene mutations give useful information for developing high-yield strains by metabolic engineering. Overall, this work provides a useful high-throughput screening method for developing high-VB12-yield strains.

  20. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  1. Maximally incompatible quantum observables

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro; Ziman, Mario

    2014-01-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  2. Maximizing potential: innovative collaborative strategies between one-stops and mental health systems of care.

    Science.gov (United States)

    Boeltzig, Heike; Timmons, Jaimie Ciulla; Marrone, Joe

    2008-01-01

Barriers to seamless service delivery between workforce development and mental health systems of care have kept both entities from maximizing their potential with regard to employment for job seekers with mental illness who are capable of work and seeking employment. Using a multiple case study design, this study examined the nature of collaboration between workforce development and mental health systems to understand the policies and practices in place to assist individuals with mental illness to find and keep work. The paper presents innovative strategies that involved staff from both workforce development and mental health agencies. Findings from this research identified the following collaborative strategies: (a) the creation of liaison positions and collaborative teams; (b) staff training on mental health and workforce issues; and (c) multi-level involvement of individuals with mental illness. Implications for workforce professionals are offered as a way to stimulate implementation of such strategies.

  3. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    Directory of Open Access Journals (Sweden)

    Sunil Chinnadurai

    2017-09-01

In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to transmit power constraints and a minimum rate requirement for the cell-edge user. The problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that converges to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.

  4. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    Science.gov (United States)

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO) non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to transmit power constraints and a minimum rate requirement for the cell-edge user. The problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed that converges to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
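
Dinkelbach's procedure mentioned in the abstract can be illustrated on a toy single-link energy-efficiency problem. The sketch below is generic fractional programming, not the paper's multi-user formulation; the channel gain, circuit power, and grid are hypothetical:

```python
import math

def dinkelbach(rate, power, p_grid, tol=1e-9, max_iter=100):
    """Dinkelbach's algorithm: maximize rate(p)/power(p) over a candidate grid.

    Each iteration solves the parametric subproblem
        max_p  rate(p) - lam * power(p)
    and updates lam to the ratio achieved at the maximizer; lam converges
    monotonically to the optimal ratio.
    """
    lam = 0.0
    p_star = p_grid[0]
    for _ in range(max_iter):
        # Parametric subproblem: a grid search stands in for the convex
        # solver used in practice.
        p_star = max(p_grid, key=lambda p: rate(p) - lam * power(p))
        f_val = rate(p_star) - lam * power(p_star)
        lam = rate(p_star) / power(p_star)
        if abs(f_val) < tol:
            break
    return p_star, lam

# Toy energy-efficiency example (all values hypothetical):
# rate = log2(1 + g*p), consumed power = p + p_circuit.
g, p_circuit = 4.0, 0.5
rate = lambda p: math.log2(1.0 + g * p)
power = lambda p: p + p_circuit
p_grid = [i / 1000.0 for i in range(1, 5001)]  # 0.001 .. 5.0 W

p_opt, ee_opt = dinkelbach(rate, power, p_grid)
```

On a finite candidate set the returned ratio matches a brute-force search over the grid, which makes the update easy to verify.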

  5. SNP high-throughput screening in grapevine using the SNPlex™ genotyping system

    Directory of Open Access Journals (Sweden)

    Velasco Riccardo

    2008-01-01

Background: Until recently, only a small number of low- and mid-throughput methods had been used for single nucleotide polymorphism (SNP) discovery and genotyping in grapevine (Vitis vinifera L.). However, following completion of the sequence of the highly heterozygous genome of Pinot Noir, it has been possible to identify millions of electronic SNPs (eSNPs), providing a valuable source for high-throughput genotyping methods. Results: Herein we report the first application of the SNPlex™ genotyping system in grapevine, aiming at the anchoring of a eukaryotic genome. This approach combines robust SNP detection with automated assay readout and data analysis. 813 candidate eSNPs were developed from non-repetitive contigs of the assembled genome of Pinot Noir and tested in 90 progeny of a Syrah × Pinot Noir cross. 563 new SNP-based markers were obtained and mapped. The efficiency rate of 69% was enhanced to 80% when multiple displacement amplification (MDA) methods were used to prepare genomic DNA for the SNPlex assay. Conclusion: Unlike other SNP genotyping methods used to investigate thousands of SNPs in a few genotypes, or a few SNPs in around a thousand genotypes, the SNPlex genotyping system represents a good compromise for investigating several hundred SNPs in a hundred or more samples simultaneously. Therefore, the SNPlex assay, coupled with whole genome amplification (WGA), is a good solution for future applications in well-equipped laboratories.

  6. The Protein Maker: an automated system for high-throughput parallel purification

    International Nuclear Information System (INIS)

    Smith, Eric R.; Begley, Darren W.; Anderson, Vanessa; Raymond, Amy C.; Haffner, Taryn E.; Robinson, John I.; Edwards, Thomas E.; Duncan, Natalie; Gerdts, Cory J.; Mixon, Mark B.; Nollert, Peter; Staker, Bart L.; Stewart, Lance J.

    2011-01-01

The Protein Maker instrument addresses a critical bottleneck in structural genomics by allowing automated purification and buffer testing of multiple protein targets in parallel with a single instrument. Here, the use of this instrument to (i) purify multiple influenza-virus proteins in parallel for crystallization trials and (ii) identify optimal lysis-buffer conditions prior to large-scale protein purification is described. The Protein Maker is an automated purification system developed by Emerald BioSystems for high-throughput parallel purification of proteins and antibodies. This instrument allows multiple load, wash and elution buffers to be used in parallel along independent lines for up to 24 individual samples. To demonstrate its utility, its use in the purification of five recombinant PB2 C-terminal domains from various subtypes of the influenza A virus is described. Three of these constructs crystallized and one diffracted X-rays to sufficient resolution for structure determination and deposition in the Protein Data Bank. Methods for screening lysis buffers for a cytochrome P450 from a pathogenic fungus prior to upscaling expression and purification are also described. The Protein Maker has become a valuable asset within the Seattle Structural Genomics Center for Infectious Disease (SSGCID) and hence is a potentially valuable tool for a variety of high-throughput protein-purification applications.

  7. Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems

    Directory of Open Access Journals (Sweden)

    Weeraddana Chathuranga

    2010-01-01

We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for the joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method, inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results compare the performance of the proposed algorithm to Lagrange-relaxation-based suboptimal methods as well as to the optimal exhaustive-search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.
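
A common low-complexity decomposition of this kind separates subcarrier assignment from power allocation. The sketch below is a generic two-step heuristic in that spirit, not the paper's exact algorithm; the channel gains, weights, and noise level are hypothetical:

```python
import math

def weighted_sum_rate_allocation(gains, weights, p_total, noise=1.0):
    """Two-step heuristic for weighted sum-rate maximization in downlink
    OFDMA: (1) assign each subcarrier to the user with the largest
    weighted rate at equal power, (2) water-fill the total power over
    the chosen user/subcarrier pairs.

    gains[k][n]: channel gain of user k on subcarrier n.
    """
    n_users, n_sub = len(gains), len(gains[0])
    p_eq = p_total / n_sub
    # Step 1: subcarrier assignment at equal power.
    assign = []
    for n in range(n_sub):
        k_best = max(range(n_users),
                     key=lambda k: weights[k] *
                     math.log2(1 + gains[k][n] * p_eq / noise))
        assign.append(k_best)

    # Step 2: weighted water-filling. Power on subcarrier n is
    # p_n = max(0, w_k / (lam * ln 2) - noise / g_n); the multiplier lam
    # is found by bisection so the powers sum to p_total.
    def powers(lam):
        return [max(0.0, weights[assign[n]] / (lam * math.log(2))
                    - noise / gains[assign[n]][n])
                for n in range(n_sub)]

    lo, hi = 1e-9, 1e9
    for _ in range(200):
        lam = math.sqrt(lo * hi)          # geometric bisection
        if sum(powers(lam)) > p_total:
            lo = lam
        else:
            hi = lam
    p = powers(math.sqrt(lo * hi))
    rate = sum(weights[assign[n]] *
               math.log2(1 + gains[assign[n]][n] * p[n] / noise)
               for n in range(n_sub))
    return assign, p, rate
```

With two users and three subcarriers the assignment step simply picks the stronger user per subcarrier, and the bisection recovers the water level to high precision.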

  8. Chemical equilibrium. [maximizing entropy of gas system to derive relations between thermodynamic variables

    Science.gov (United States)

    1976-01-01

    The entropy of a gas system with the number of particles subject to external control is maximized to derive relations between the thermodynamic variables that obtain at equilibrium. These relations are described in terms of the chemical potential, defined as equivalent partial derivatives of entropy, energy, enthalpy, free energy, or free enthalpy. At equilibrium, the change in total chemical potential must vanish. This fact is used to derive the equilibrium constants for chemical reactions in terms of the partition functions of the species involved in the reaction. Thus the equilibrium constants can be determined accurately, just as other thermodynamic properties, from a knowledge of the energy levels and degeneracies for the gas species involved. These equilibrium constants permit one to calculate the equilibrium concentrations or partial pressures of chemically reacting species that occur in gas mixtures at any given condition of pressure and temperature or volume and temperature.
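
In standard statistical-mechanics notation (a textbook reconstruction, not taken from the record itself), the chain of relations described above can be written as:

```latex
% Chemical potential of ideal-gas species i with partition function q_i
% and particle number N_i:
\mu_i = -kT \ln\frac{q_i}{N_i}
% Equilibrium condition for a reaction \sum_i \nu_i A_i = 0
% (the change in total chemical potential vanishes):
\sum_i \nu_i \mu_i = 0
% which yields the concentration equilibrium constant in terms of
% partition functions per unit volume:
K_c(T) = \prod_i \left(\frac{q_i}{V}\right)^{\nu_i}
```

The last line is what lets equilibrium constants be computed from energy levels and degeneracies alone, as the abstract states.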

  9. An Optimal Control Method for Maximizing the Efficiency of Direct Drive Ocean Wave Energy Extraction System

    Science.gov (United States)

    Chen, Zhongxian; Yu, Haitao; Wen, Cheng

    2014-01-01

The goal of a direct drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of such a system. An optimal control method based on internal model proportion integration differentiation (IM-PID) is proposed, whereas most ocean wave energy extraction systems are optimized through their structure, weight, and materials. With this control method, the heave speed of the outer heavy buoy of the energy extraction system is brought into resonance with the incident wave, and the system efficiency is largely improved. Validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that it has good robustness, high precision, and strong anti-interference ability. PMID:25152913

  10. An optimal control method for maximizing the efficiency of direct drive ocean wave energy extraction system.

    Science.gov (United States)

    Chen, Zhongxian; Yu, Haitao; Wen, Cheng

    2014-01-01

The goal of a direct drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of such a system. An optimal control method based on internal model proportion integration differentiation (IM-PID) is proposed, whereas most ocean wave energy extraction systems are optimized through their structure, weight, and materials. With this control method, the heave speed of the outer heavy buoy of the energy extraction system is brought into resonance with the incident wave, and the system efficiency is largely improved. Validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that it has good robustness, high precision, and strong anti-interference ability.
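
The IM-PID law builds on a conventional PID core. The minimal discrete sketch below tracks a constant reference on a toy first-order plant; the gains and plant are hypothetical stand-ins, and the internal-model part of the paper's controller is omitted:

```python
def pid_step(state, setpoint, measured, kp, ki, kd, dt):
    """One update of a discrete PID law u = Kp*e + Ki*int(e) + Kd*de/dt.
    `state` carries (integral, previous_error)."""
    integral, prev_err = state
    err = setpoint - measured
    integral += err * dt
    deriv = (err - prev_err) / dt
    u = kp * err + ki * integral + kd * deriv
    return u, (integral, err)

# Toy demo: drive a first-order "buoy velocity" model x' = (u - x)/tau
# toward a constant reference (the paper's IM-PID additionally embeds an
# internal model of the wave excitation).
tau, dt = 0.5, 0.01
x, state = 0.0, (0.0, 0.0)
for _ in range(2000):
    u, state = pid_step(state, 1.0, x, kp=2.0, ki=1.0, kd=0.05, dt=dt)
    x += dt * (u - x) / tau
```

The integral term removes the steady-state offset, which is why the plant output settles at the reference.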

  11. Using ALFA for high throughput, distributed data transmission in the ALICE O2 system

    Science.gov (United States)

    Wegrzynek, A.; ALICE Collaboration

    2017-10-01

ALICE (A Large Ion Collider Experiment) is a heavy-ion detector designed to study the physics of strongly interacting matter (the Quark-Gluon Plasma) at the CERN LHC (Large Hadron Collider). ALICE has been successfully collecting physics data in Run 2 since spring 2015. In parallel, preparations for a major upgrade of the computing system, called O2 (Online-Offline) and scheduled for the Long Shutdown 2 in 2019-2020, are being made. One of the major requirements of the system is the capacity to transport data between the so-called FLPs (First Level Processors), equipped with readout cards, and the EPNs (Event Processing Nodes), which perform data aggregation, frame building and partial reconstruction. It is foreseen to have 268 FLPs dispatching data to 1500 EPNs with an average output of 20 Gb/s each. Overall, the O2 processing system will operate at terabits per second of throughput while handling millions of concurrent connections. The ALFA framework will standardize and handle software-related tasks such as readout, data transport, frame building, calibration, online reconstruction and more in the upgraded computing system. ALFA supports two data transport libraries: ZeroMQ and nanomsg. This paper discusses the efficiency of ALFA in terms of high-throughput data transport. The tests were performed with multiple FLPs pushing data to multiple EPNs. The transfer was done using push-pull communication patterns and two socket configurations: bind and connect. The set of benchmarks was prepared to obtain the most performant results on each hardware setup. The paper presents the measurement process and final results: data throughput combined with computing resource usage as a function of block size. The high number of nodes and connections in the final setup may cause race conditions that can lead to uneven load balancing and poor scalability. The performed tests allow us to validate whether the traffic is distributed evenly over all receivers. It also measures the behaviour of
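
ZeroMQ's PUSH/PULL pattern round-robins messages over connected peers, which is what makes the even load distribution tested here plausible. A pure-Python stand-in for that fan-out (not the ALFA/FairMQ API):

```python
import collections
import itertools

def push_pull_distribute(frames, n_epns):
    """Minimal model of a PUSH socket's fan-out to PULL peers, as between
    FLPs and EPNs: messages are dealt round-robin, so load stays even as
    long as every receiver keeps up."""
    epn_queues = [collections.deque() for _ in range(n_epns)]
    rr = itertools.cycle(range(n_epns))
    for frame in frames:
        epn_queues[next(rr)].append(frame)
    return [len(q) for q in epn_queues]

# 1500 frames over 6 receivers land perfectly balanced in this model;
# real deployments deviate when receivers stall, which is exactly what
# the paper's benchmarks probe.
loads = push_pull_distribute(range(1500), n_epns=6)
```
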

  12. Ultra-High-Throughput Sample Preparation System for Lymphocyte Immunophenotyping Point-of-Care Diagnostics.

    Science.gov (United States)

    Walsh, David I; Murthy, Shashi K; Russom, Aman

    2016-10-01

    Point-of-care (POC) microfluidic devices often lack the integration of common sample preparation steps, such as preconcentration, which can limit their utility in the field. In this technology brief, we describe a system that combines the necessary sample preparation methods to perform sample-to-result analysis of large-volume (20 mL) biopsy model samples with staining of captured cells. Our platform combines centrifugal-paper microfluidic filtration and an analysis system to process large, dilute biological samples. Utilizing commercialization-friendly manufacturing methods and materials, yielding a sample throughput of 20 mL/min, and allowing for on-chip staining and imaging bring together a practical, yet powerful approach to microfluidic diagnostics of large, dilute samples. © 2016 Society for Laboratory Automation and Screening.

  13. Efficient high-throughput biological process characterization: Definitive screening design with the ambr250 bioreactor system.

    Science.gov (United States)

    Tai, Mitchell; Ly, Amanda; Leung, Inne; Nayar, Gautam

    2015-01-01

    The burgeoning pipeline for new biologic drugs has increased the need for high-throughput process characterization to efficiently use process development resources. Breakthroughs in highly automated and parallelized upstream process development have led to technologies such as the 250-mL automated mini bioreactor (ambr250™) system. Furthermore, developments in modern design of experiments (DoE) have promoted the use of definitive screening design (DSD) as an efficient method to combine factor screening and characterization. Here we utilize the 24-bioreactor ambr250™ system with 10-factor DSD to demonstrate a systematic experimental workflow to efficiently characterize an Escherichia coli (E. coli) fermentation process for recombinant protein production. The generated process model is further validated by laboratory-scale experiments and shows how the strategy is useful for quality by design (QbD) approaches to control strategies for late-stage characterization. © 2015 American Institute of Chemical Engineers.

  14. Identifying genes that extend life span using a high-throughput screening system.

    Science.gov (United States)

    Chen, Cuiying; Contreras, Roland

    2007-01-01

    We developed a high-throughput functional genomic screening system that allows identification of genes prolonging lifespan in the baker's yeast Saccharomyces cerevisiae. The method is based on isolating yeast mother cells with a higher than average number of cell divisions as indicated by the number of bud scars on their surface. Fluorescently labeled wheat germ agglutinin (WGA) was used for specific staining of chitin, a major component of bud scars. The critical new steps in our bud-scar-sorting system are the use of small microbeads, which allows successive rounds of purification and regrowth of the mother cells (M-cell), and utilization of flow cytometry to sort and isolate cells with a longer lifespan based on the number of bud scars specifically labeled with WGA.

  15. Data reduction for a high-throughput neutron activation analysis system

    International Nuclear Information System (INIS)

    Bowman, W.W.

    1979-01-01

    To analyze samples collected as part of a geochemical survey for the National Uranium Resource Evaluation program, Savannah River Laboratory has installed a high-throughput neutron activation analysis system. As part of that system, computer programs have been developed to reduce raw data to elemental concentrations in two steps. Program RAGS reduces gamma-ray spectra to lists of photopeak energies, peak areas, and statistical errors. Program RICHES determines the elemental concentrations from photopeak and delayed-neutron data, detector efficiencies, analysis parameters (neutron flux and activation, decay, and counting times), and spectrometric and cross-section data from libraries. Both programs have been streamlined for on-line operation with a minicomputer, each requiring approx. 64 kbytes of core. 3 tables

  16. Smith machine counterbalance system affects measures of maximal bench press throw performance.

    Science.gov (United States)

    Vingren, Jakob L; Buddhadev, Harsh H; Hill, David W

    2011-07-01

Equipment with counterbalance weight systems is commonly used for the assessment of performance in explosive resistance exercise movements, but it is not known whether such systems affect performance measures. The purpose of this study was to determine the effect of using a counterbalance weight system on measures of Smith machine bench press throw performance. Ten men and 14 women (mean ± SD: age, 25 ± 4 years; height, 173 ± 10 cm; weight, 77.7 ± 18.3 kg) completed maximal Smith machine bench press throws under 4 different conditions (2 × 2; counterbalance × load): with or without a counterbalance weight system and using 'light' or 'moderate' net barbell loads. Performance variables (peak force, peak velocity, and peak power) were measured using a linear accelerometer attached to the barbell. The counterbalance weight system resulted in significant (p < 0.05) reductions in peak velocity (light: -0.49 ± 0.10 m·s⁻¹; moderate: -0.33 ± 0.07 m·s⁻¹) and peak power (light: -220 ± 43 W; moderate: -143 ± 28 W) compared with no counterbalance system for both load conditions. Load condition did not affect the absolute or percentage reductions from the counterbalance weight system for any variable. In conclusion, the use of a counterbalance weight system reduces accelerometer-based performance measures for the bench press throw exercise at light and moderate loads. This reduction is likely because of an increase in the external resistance during the movement, which results in a discrepancy between the manually input value and the actual external load. A counterbalance weight system should not be used when measuring performance in explosive resistance exercises with an accelerometer.
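
One way to see why the external resistance increases during the throw is a simple frictionless cable-over-pulley model. This is an illustrative reconstruction, not the authors' analysis, and the masses and acceleration below are hypothetical:

```python
G = 9.81  # gravitational acceleration, m/s^2

def actual_external_force(m_bar, m_counter, a):
    """Force needed to accelerate a counterbalanced barbell upward at `a`
    (ideal cable-over-pulley model): the counterweight offsets gravity
    but still has to be accelerated along with the bar, so
        F = (m_bar - m_counter) * g + (m_bar + m_counter) * a.
    With m_counter = 0 this reduces to the ordinary F = m_bar * (g + a)."""
    return (m_bar - m_counter) * G + (m_bar + m_counter) * a

# Hypothetical numbers: both setups "feel" like 20 kg at rest, but the
# counterbalanced bar carries 35 kg of moving mass against a 15 kg
# counterweight.
f_plain = actual_external_force(20.0, 0.0, 5.0)   # no counterbalance
f_cb = actual_external_force(35.0, 15.0, 5.0)     # same 20 kg static load
```

For the same static load, accelerating the counterbalanced bar requires more force, so velocity and power computed from the nominal load come out lower, consistent with the reductions reported above.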

  17. Size matters: Installed maximal unit size predicts market life cycles of electricity generation technologies and systems

    International Nuclear Information System (INIS)

    Li, N.

    2008-01-01

The electricity generation technologies and systems are complex and change in very dynamic fashion, with a multitude of energy sources and prime movers. Since an important concept in generator design is 'economies of scale', we discover that the installed maximal unit size (capacity) of the generators is a key 'envelope-pushing' characteristic with logistic behavior. The logistic wavelet analysis of the max unit sizes for different fuels and prime movers, and of the cumulative capacities, reveals universal quantitative features in the aggregate evolution of the power industry. We extract the transition times of the max sizes (spanning 10-90% of the saturation limits) for different technologies and systems, and discover that max-size saturation in the 90-99% range precedes the saturation of the cumulative capacities of the corresponding systems in the US. While these universal laws are still empirical, they give us a simple yet elegant framework to examine the evolution of the power industry and markets in predictive, not just descriptive, terms. Such laws give us a quantitative tool to spot trends and predict future development, invaluable in planning and resource allocation based on intrinsic technology and system market life cycles.
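
For a pure logistic curve, the 10-90% transition time extracted above has a closed form. A minimal sketch with hypothetical numbers (not taken from the paper):

```python
import math

def logistic(t, limit, k, t0):
    """Logistic (S-shaped) growth curve commonly fitted to technology
    diffusion: limit / (1 + exp(-k (t - t0)))."""
    return limit / (1.0 + math.exp(-k * (t - t0)))

def transition_time(k):
    """Time to go from 10% to 90% of the saturation limit: ln(81) / k."""
    return math.log(81.0) / k

# Hypothetical max-unit-size trajectory: 1000 MW limit, midpoint 1960,
# growth rate k = 0.15 per year.
k, t0, limit = 0.15, 1960.0, 1000.0
t10 = t0 - math.log(9.0) / k   # solves logistic(t10) = 0.1 * limit
t90 = t0 + math.log(9.0) / k   # solves logistic(t90) = 0.9 * limit
```

With k = 0.15/yr the 10-90% span is ln(81)/0.15, about 29 years, which is the kind of life-cycle figure such fits produce.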

  18. High-throughput profiling of antibiotic resistance genes in drinking water treatment plants and distribution systems.

    Science.gov (United States)

    Xu, Like; Ouyang, Weiying; Qian, Yanyun; Su, Chao; Su, Jianqiang; Chen, Hong

    2016-06-01

Antibiotic resistance genes (ARGs) are present in surface water and often cannot be completely eliminated by drinking water treatment plants (DWTPs). Improper elimination of ARG-harboring microorganisms contaminates the water supply and can lead to animal and human disease. Therefore, it is of utmost importance to determine the most effective ways by which DWTPs can eliminate ARGs. Here, we tested water samples from two DWTPs and their distribution systems and detected the presence of 285 ARGs, 8 transposases, and intI-1 by utilizing high-throughput qPCR. The prevalence of ARGs differed in the two DWTPs, one of which employed conventional water treatments while the other had advanced treatment processes. The relative abundance of ARGs increased significantly after treatment with biological activated carbon (BAC), raising the number of detected ARGs from 76 to 150. Furthermore, the final chlorination step enhanced the relative abundance of ARGs in the finished water generated from both DWTPs. The total enrichment of ARGs varied from 6.4- to 109.2-fold in tap water compared to finished water, among which beta-lactam resistance genes displayed the highest enrichment. Six transposase genes were detected in tap water samples, with the transposase gene TnpA-04 showing the greatest enrichment (up to 124.9-fold). We observed significant positive correlations between ARGs and mobile genetic elements (MGEs) throughout the distribution systems, indicating that transposases and intI-1 may contribute to antibiotic resistance in drinking water. To our knowledge, this is the first study to investigate the diversity and abundance of ARGs in drinking water treatment systems utilizing high-throughput qPCR techniques in China. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. High-throughput mosquito and fly bioassay system for natural and artificial substrates treated with residual insecticides.

    Science.gov (United States)

    Aldridge, Robert L; Wynn, W Wayne; Britch, Seth C; Allan, Sandra A; Walker, Todd W; Geden, Christopher J; Hogsette, Jerome A; Linthicum, Kenneth J

    2013-03-01

    A high-throughput bioassay system to evaluate the efficacy of residual pesticides against mosquitoes and muscid flies with minimal insect handling was developed. The system consisted of 4 components made of readily available materials: 1) a CO2 anaesthetizing chamber, 2) a specialized aspirator, 3) a cylindrical flat-bottomed glass bioassay chamber assembly, and 4) a customized rack.

  20. Maximizing the energy storage performance of phase change thermal storage systems

    Energy Technology Data Exchange (ETDEWEB)

    Amin, N.A.M.; Bruno, F.; Belusko, M. [South Australia Univ., Mawson Lakes, South Australia (Australia). Inst. for Sustainable Systems and Technologies

    2009-07-01

The demand for electricity in South Australia is highly influenced by the need for refrigeration and air-conditioning. An extensive literature review has been conducted on the use of phase change materials (PCMs) in thermal storage systems. PCMs use latent heat at the solid-liquid phase transition point to store thermal energy. They are considered to be useful as a thermal energy storage (TES) material because they can provide much higher energy storage densities than conventional sensible thermal storage materials. This paper reviewed the main disadvantages of using PCMs for energy storage, such as low heat transfer, supercooling, and system design issues. Other issues with PCMs include incongruence and corrosion of heat exchanger surfaces. The authors suggested that in order to address these problems, future research should focus on maximizing heat transfer by optimizing the configuration of the encapsulation through a parametric analysis using a PCM numerical model. The effective conductivity of encapsulated PCMs in a latent heat thermal energy storage (LHTES) system can also be increased by using conductors in the encapsulation that have high thermal conductivity. 47 refs., 1 tab., 1 fig.

  1. Blind Multiuser Detection by Kurtosis Maximization for Asynchronous Multirate DS/CDMA Systems

    Directory of Open Access Journals (Sweden)

    Peng Chun-Hsien

    2006-01-01

Chi et al. proposed a fast kurtosis maximization algorithm (FKMA) for blind equalization/deconvolution of multiple-input multiple-output (MIMO) linear time-invariant systems. This algorithm has been applied to blind multiuser detection of single-rate direct-sequence/code-division multiple-access (DS/CDMA) systems and to blind source separation (or independent component analysis). In this paper, the FKMA is further applied to blind multiuser detection for multirate DS/CDMA systems. The ideas are to properly formulate discrete-time MIMO signal models by converting real multirate users into single-rate virtual users, followed by the use of the FKMA for extraction of the virtual users' data sequences associated with the desired user, and recovery of the data sequence of the desired user from the estimated virtual users' data sequences. Assuming that all the users' spreading sequences are given a priori, two multirate blind multiuser detection algorithms (with either a single receive antenna or multiple antennas), which also enjoy the merits of the superexponential convergence rate and guaranteed convergence of the FKMA, are proposed in the paper, one based on a convolutional MIMO signal model and the other based on an instantaneous MIMO signal model. Some simulation results are then presented to demonstrate their effectiveness and to provide a performance comparison with some existing algorithms.
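
The core idea of kurtosis-maximizing extraction can be illustrated with a FastICA-style fixed point on whitened data; this is a simple stand-in for the FKMA iteration, not the paper's algorithm, and the two-source mixture below is synthetic:

```python
import math
import random

def extract_by_kurtosis(X, iters=50, seed=1):
    """Fixed-point kurtosis-maximization step on whitened 2-D data:
        w <- E[(w.x)^3 x] - 3w, then normalize.
    Converges (up to sign) to a direction extracting one independent
    source with extremal kurtosis."""
    rng = random.Random(seed)
    w = [rng.gauss(0, 1), rng.gauss(0, 1)]
    norm = math.hypot(w[0], w[1])
    w = [c / norm for c in w]
    for _ in range(iters):
        m0 = m1 = 0.0
        for x0, x1 in X:
            y = w[0] * x0 + w[1] * x1      # y = w . x
            y3 = y ** 3
            m0 += y3 * x0
            m1 += y3 * x1
        m0 = m0 / len(X) - 3 * w[0]        # E[(w.x)^3 x] - 3w
        m1 = m1 / len(X) - 3 * w[1]
        norm = math.hypot(m0, m1)
        w = [m0 / norm, m1 / norm]
    return w

# Synthetic check: one sub-Gaussian (uniform) and one super-Gaussian
# (Laplacian) unit-variance source, mixed by a rotation so the mixture
# is already white.
rng = random.Random(0)
theta = 0.7
c, s = math.cos(theta), math.sin(theta)
S = [(rng.uniform(-math.sqrt(3), math.sqrt(3)),
      rng.expovariate(math.sqrt(2)) * rng.choice([-1, 1]))
     for _ in range(8000)]
X = [(c * s1 - s * s2, s * s1 + c * s2) for s1, s2 in S]
w = extract_by_kurtosis(X)
# Up to sign, w should align with one column of the mixing rotation.
align = max(abs(w[0] * c + w[1] * s), abs(-w[0] * s + w[1] * c))
```
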

  2. An Automated, High-Throughput System for GISAXS and GIWAXS Measurements of Thin Films

    Science.gov (United States)

    Schaible, Eric; Jimenez, Jessica; Church, Matthew; Lim, Eunhee; Stewart, Polite; Hexemer, Alexander

Grazing incidence small-angle X-ray scattering (GISAXS) and grazing incidence wide-angle X-ray scattering (GIWAXS) are important techniques for characterizing thin films. In order to meet rapidly increasing demand, the SAXS/WAXS beamline at the Advanced Light Source (beamline 7.3.3) has implemented a fully automated, high-throughput system to conduct SAXS, GISAXS and GIWAXS measurements. An automated robot arm transfers samples from a holding tray to a measurement stage. Intelligent software aligns each sample in turn and measures each according to user-defined specifications. Users mail in trays of samples on individually barcoded pucks, and can download and view their data remotely. Data will be pipelined to the NERSC supercomputing facility, and will be available to users via a web portal that facilitates highly parallelized analysis.

  3. Link Analysis of High Throughput Spacecraft Communication Systems for Future Science Missions

    Science.gov (United States)

    Simons, Rainee N.

    2015-01-01

    NASA's plan to launch several spacecraft into low Earth orbit (LEO) to support science missions in the next ten years and beyond requires downlink throughput on the order of several terabits per day. The ability to handle such a large volume of data far exceeds the capabilities of current systems. This paper proposes two solutions: first, a high data rate link between the LEO spacecraft and the ground via relay satellites in geostationary orbit (GEO); second, a high data rate direct-to-ground link from LEO. Next, the paper presents results from computer simulations carried out for both types of links, taking into consideration spacecraft transmitter frequency, EIRP, and waveform; elevation-angle-dependent path loss through Earth's atmosphere; and ground station receiver G/T.
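The quantities named above (EIRP, path loss, G/T) combine into a standard carrier-to-noise-density link budget. A minimal sketch using the free-space path loss (Friis) formula; every numeric value below (frequency, slant range, EIRP, G/T, margins, required Eb/N0) is an illustrative assumption, not a figure from the paper:

```python
import math

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB (Friis): 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / c)

def cn0_dbhz(eirp_dbw, path_loss_db, gt_dbk, extra_loss_db=0.0):
    """Carrier-to-noise-density ratio: C/N0 = EIRP - losses + G/T - 10*log10(k)."""
    k_dbw_per_k_hz = -228.6  # Boltzmann's constant in dBW/(K*Hz)
    return eirp_dbw - path_loss_db - extra_loss_db + gt_dbk - k_dbw_per_k_hz

# Hypothetical Ka-band LEO direct-to-ground pass (all numbers assumed):
loss = fspl_db(26e9, 1200e3)   # 26 GHz carrier, 1200 km slant range
cn0 = cn0_dbhz(eirp_dbw=20.0, path_loss_db=loss, gt_dbk=35.0, extra_loss_db=5.0)
rate_dbhz = cn0 - 4.0          # assume coded modulation needing Eb/N0 = 4 dB
print(f"FSPL {loss:.1f} dB, C/N0 {cn0:.1f} dB-Hz, rate {10**(rate_dbhz / 10) / 1e6:.0f} Mbit/s")
```

For the GEO-relay option the same arithmetic applies per hop, using the relay's EIRP and G/T; atmospheric loss then enters only on the space-to-ground leg.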

  4. A low-cost, portable, high-throughput wireless sensor system for phonocardiography applications.

    Science.gov (United States)

    Sa-Ngasoongsong, Akkarapol; Kunthong, Jakkrit; Sarangan, Venkatesh; Cai, Xinwei; Bukkapatnam, Satish T S

    2012-01-01

    This paper presents the design and testing of a wireless sensor system developed using a Microchip PICDEM developer kit to acquire and monitor human heart sounds for phonocardiography applications. This system can serve as a cost-effective option to the recent developments in wireless phonocardiography sensors that have primarily focused on Bluetooth technology. This wireless sensor system has been designed and developed in-house using off-the-shelf components and open source software for remote and mobile applications. The small form factor (3.75 cm × 5 cm × 1 cm), high throughput (6,000 Hz data streaming rate), and low cost ($13 per unit for a 1,000 unit batch) of this wireless sensor system make it particularly attractive for phonocardiography and other sensing applications. The experimental results of sensor signal analysis using several signal characterization techniques suggest that this wireless sensor system can capture both fundamental heart sounds (S1 and S2), and is also capable of capturing abnormal heart sounds (S3 and S4) and heart murmurs without aliasing. The results of a denoising application using Wavelet Transform show that the undesirable noises of sensor signals in the surrounding environment can be reduced dramatically. The exercising experiment results also show that this proposed wireless PCG system can capture heart sounds over different heart conditions simulated by varying heart rates of six subjects over a range of 60-180 Hz through exercise testing.
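The claim that the 6,000 Hz streaming rate captures heart sounds and murmurs without aliasing follows from the Nyquist criterion, given that phonocardiogram energy lies mostly below about 1 kHz (a textbook figure assumed here, not stated in the abstract). A small sketch:

```python
def nyquist_ok(fs_hz, f_max_hz):
    """A band-limited signal is represented without aliasing iff fs > 2*f_max."""
    return fs_hz > 2 * f_max_hz

def alias_frequency(f_signal, fs):
    """Apparent frequency of an undersampled tone: fold f into [0, fs/2]."""
    f = f_signal % fs
    return f if f <= fs / 2 else fs - f

fs = 6000  # sensor streaming rate from the abstract, in Hz
print(nyquist_ok(fs, 1000))        # heart-sound band is comfortably covered
print(alias_frequency(5000, fs))   # a 5 kHz tone would fold down to 1 kHz
```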

  5. A Low-Cost, Portable, High-Throughput Wireless Sensor System for Phonocardiography Applications

    Directory of Open Access Journals (Sweden)

    Akkarapol Sa-ngasoongsong

    2012-08-01

    Full Text Available This paper presents the design and testing of a wireless sensor system developed using a Microchip PICDEM developer kit to acquire and monitor human heart sounds for phonocardiography applications. This system can serve as a cost-effective option to the recent developments in wireless phonocardiography sensors that have primarily focused on Bluetooth technology. This wireless sensor system has been designed and developed in-house using off-the-shelf components and open source software for remote and mobile applications. The small form factor (3.75 cm × 5 cm × 1 cm), high throughput (6,000 Hz data streaming rate), and low cost ($13 per unit for a 1,000 unit batch) of this wireless sensor system make it particularly attractive for phonocardiography and other sensing applications. The experimental results of sensor signal analysis using several signal characterization techniques suggest that this wireless sensor system can capture both fundamental heart sounds (S1 and S2), and is also capable of capturing abnormal heart sounds (S3 and S4) and heart murmurs without aliasing. The results of a denoising application using Wavelet Transform show that the undesirable noises of sensor signals in the surrounding environment can be reduced dramatically. The exercising experiment results also show that this proposed wireless PCG system can capture heart sounds over different heart conditions simulated by varying heart rates of six subjects over a range of 60–180 Hz through exercise testing.

  6. Maximal superintegrability of the generalized Kepler-Coulomb system on N-dimensional curved spaces

    International Nuclear Information System (INIS)

    Ballesteros, Angel; Herranz, Francisco J

    2009-01-01

    The superposition of the Kepler-Coulomb potential on the 3D Euclidean space with three centrifugal terms has recently been shown to be maximally superintegrable (Verrier and Evans 2008 J. Math. Phys. 49 022902) by finding an additional (hidden) integral of motion which is quartic in the momenta. In this paper, we present the generalization of this result to the N-dimensional spherical, hyperbolic and Euclidean spaces by making use of a unified symmetry approach based on the curvature parameter. The resulting Hamiltonian, formed by the (curved) Kepler-Coulomb potential together with N centrifugal terms, is shown to be endowed with 2N - 1 functionally independent integrals of the motion: one of them is quartic and the remaining ones are quadratic. The transition from the proper Kepler-Coulomb potential, with its associated quadratic Laplace-Runge-Lenz N-vector, to the generalized system is fully described. The role of spherical, nonlinear (cubic) and coalgebra symmetries in all these systems is highlighted.

  7. Foundations of data-intensive science: Technology and practice for high throughput, widely distributed, data management and analysis systems

    Science.gov (United States)

    Johnston, William; Ernst, M.; Dart, E.; Tierney, B.

    2014-04-01

    Today's large-scale science projects involve world-wide collaborations that depend on moving massive amounts of data from an instrument to potentially thousands of computing and storage systems at hundreds of collaborating institutions to accomplish their science. This is true for ATLAS and CMS at the LHC, and it is true for the climate sciences, Belle-II at the KEK collider, genome sciences, the SKA radio telescope, and ITER, the international fusion energy experiment. DOE's Office of Science has been collecting science discipline and instrument requirements for network-based data management and analysis for more than a decade. As a result of this effort, certain key issues are seen across essentially all science disciplines that rely on the network for significant data transfer, even if the data quantities are modest compared to projects like the LHC experiments. These issues are what this talk will address; to wit: 1. Optical signal transport advances enabling 100 Gb/s circuits that span the globe on optical fiber, with each fiber carrying 100 such channels; 2. Network router and switch requirements to support high-speed international data transfer; 3. Data transport (TCP is still the norm) requirements to support high-speed international data transfer (e.g. error-free transmission); 4. Network monitoring and testing techniques and infrastructure to maintain the required error-free operation of the many R&E networks involved in international collaborations; 5. Operating system evolution to support very high-speed network I/O; 6. New network architectures and services in the LAN (campus) and WAN networks to support data-intensive science; 7. Data movement and management techniques and software that can maximize the throughput on the network connections between distributed data handling systems, and; 8. New approaches to widely distributed workflow systems that can support the data movement and analysis required by the science. All of these areas must be addressed to enable large-scale, data-intensive science.

  8. Improvement of an automated protein crystal exchange system PAM for high-throughput data collection

    International Nuclear Information System (INIS)

    Hiraki, Masahiko; Yamada, Yusuke; Chavas, Leonard M. G.; Wakatsuki, Soichi; Matsugaki, Naohiro

    2013-01-01

    A special liquid-nitrogen Dewar with double capacity for the sample-exchange robot has been created at AR-NE3A at the Photon Factory, allowing continuous fully automated data collection. In this work, this new system is described and the stability of its calibration is discussed. Photon Factory Automated Mounting system (PAM) protein crystal exchange systems are available at the following Photon Factory macromolecular beamlines: BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. The beamline AR-NE3A has been constructed for high-throughput macromolecular crystallography and is dedicated to structure-based drug design. The PAM liquid-nitrogen Dewar can store a maximum of three SSRL cassettes. Therefore, users have to interrupt their experiments and replace the cassettes when using four or more of them during their beam time. An investigation showed that four or more cassettes were used at AR-NE3A alone. For continuous automated data collection, the size of the liquid-nitrogen Dewar for the AR-NE3A PAM was increased, doubling the capacity. In order to check the calibration with the new Dewar and the cassette stand, calibration experiments were repeatedly performed. Compared with the current system, the parameters of the novel system are shown to be stable.

  9. Improvement of an automated protein crystal exchange system PAM for high-throughput data collection

    Energy Technology Data Exchange (ETDEWEB)

    Hiraki, Masahiko, E-mail: masahiko.hiraki@kek.jp; Yamada, Yusuke; Chavas, Leonard M. G. [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); Wakatsuki, Soichi [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan); SLAC National Accelerator Laboratory, 2575 Sand Hill Road, MS 69, Menlo Park, CA 94025-7015 (United States); Stanford University, Beckman Center B105, Stanford, CA 94305-5126 (United States); Matsugaki, Naohiro [High Energy Accelerator Research Organization, 1-1 Oho, Tsukuba, Ibaraki 305-0801 (Japan)

    2013-11-01

    A special liquid-nitrogen Dewar with double capacity for the sample-exchange robot has been created at AR-NE3A at the Photon Factory, allowing continuous fully automated data collection. In this work, this new system is described and the stability of its calibration is discussed. Photon Factory Automated Mounting system (PAM) protein crystal exchange systems are available at the following Photon Factory macromolecular beamlines: BL-1A, BL-5A, BL-17A, AR-NW12A and AR-NE3A. The beamline AR-NE3A has been constructed for high-throughput macromolecular crystallography and is dedicated to structure-based drug design. The PAM liquid-nitrogen Dewar can store a maximum of three SSRL cassettes. Therefore, users have to interrupt their experiments and replace the cassettes when using four or more of them during their beam time. An investigation showed that four or more cassettes were used at AR-NE3A alone. For continuous automated data collection, the size of the liquid-nitrogen Dewar for the AR-NE3A PAM was increased, doubling the capacity. In order to check the calibration with the new Dewar and the cassette stand, calibration experiments were repeatedly performed. Compared with the current system, the parameters of the novel system are shown to be stable.

  10. Entropy maximization

    Indian Academy of Sciences (India)

    Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f hᵢ dμ = λᵢ for i = 1, 2, ..., k, the maximizer of entropy is an f₀ that is proportional to exp(∑ cᵢ hᵢ) for some choice of cᵢ. An extension of this to a continuum of ...

  11. Entropy Maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdf f that satisfy ∫ f hᵢ dμ = λᵢ for i = 1, 2, ..., k, the maximizer of entropy is an f₀ that is proportional to exp(∑ cᵢ hᵢ) for some choice of cᵢ. An extension of this to a continuum of ...
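The result above says that the entropy maximizer under moment constraints has the exponential-family form f₀ ∝ exp(∑ cᵢ hᵢ). A numeric illustration for a single constraint h(x) = x on a finite support (the loaded-die example is mine, not from the abstract): bisection finds the constant c so that the Gibbs-form distribution matches a prescribed mean.

```python
import math

def maxent_weights(values, target_mean, lo=-10.0, hi=10.0, iters=100):
    """Discrete maximum-entropy distribution with E[X] = target_mean:
    p(x) proportional to exp(c*x), with c found by bisection
    (the mean is monotone increasing in c, so bisection converges)."""
    values = list(values)

    def mean_for(c):
        w = [math.exp(c * v) for v in values]
        z = sum(w)
        return sum(v * wi for v, wi in zip(values, w)) / z

    for _ in range(iters):
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    c = (lo + hi) / 2
    w = [math.exp(c * v) for v in values]
    z = sum(w)
    return [wi / z for wi in w], c

# Loaded-die example: faces 1..6 constrained to have mean 4.5
p, c = maxent_weights(range(1, 7), 4.5)
```

Since the target mean 4.5 exceeds the uniform mean 3.5, the tilt c comes out positive, weighting high faces more heavily while staying as close to uniform (maximal entropy) as the constraint allows.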

  12. High-throughput reactor system with individual temperature control for the investigation of monolith catalysts

    Science.gov (United States)

    Dellamorte, Joseph C.; Vijay, Rohit; Snively, Christopher M.; Barteau, Mark A.; Lauterbach, Jochen

    2007-07-01

    A high-throughput parallel reactor system has been designed and constructed to improve the reliability of results from large diameter catalysts such as monoliths. The system, which is expandable, consists of eight quartz reactors, 23.5 mm in diameter. The eight reactors were designed with separate K-type thermocouples and radiant heaters, allowing for the independent measurement and control of each reactor temperature. This design gives steady-state temperature distributions over the eight reactors within 0.5 °C of a common setpoint from 50 to 700 °C. Analysis of the effluent from these reactors is performed using rapid-scan Fourier transform infrared (FTIR) spectroscopic imaging. The integration of this technique into the reactor system allows a chemically specific, truly parallel analysis of the reactor effluents with a time resolution of approximately 8 s. The capabilities of this system were demonstrated via investigation of the effect of catalyst preparation conditions on the direct epoxidation of ethylene, i.e., on the ethylene conversion and the ethylene oxide selectivity. The ethylene, ethylene oxide, and carbon dioxide concentrations were calibrated based on spectra from FTIR imaging using univariate and multivariate chemometric techniques. The results from this analysis showed that the calcination conditions significantly affect the ethylene conversion, with a threefold increase in the conversion when the catalyst was calcined for 3 h versus 12 h at 400 °C.
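Univariate chemometric calibration of the kind mentioned above amounts to fitting a straight line (Beer-Lambert behavior) between a band intensity and known concentrations, then inverting it for unknown samples. A sketch with made-up calibration points; the numbers are illustrative, not from the paper:

```python
import numpy as np

# Hypothetical calibration set: known concentrations vs. measured peak absorbance
conc = np.array([0.0, 1.0, 2.0, 4.0, 8.0])          # e.g. mol%
absorb = np.array([0.01, 0.12, 0.23, 0.45, 0.89])   # assumed, roughly linear

# Least-squares line: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(conc, absorb, 1)

def predict_conc(a):
    """Invert the univariate calibration line to estimate a concentration."""
    return (a - intercept) / slope

est = predict_conc(0.34)  # concentration estimate for an unknown spectrum
```

Multivariate methods (e.g. partial least squares) extend the same idea to whole spectral regions, which helps when bands of different species overlap.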

  13. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting.

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-04

    Cell cutting is a significant task in biology studies, but highly productive non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and then realizes automatic non-embedded cell cutting with this system. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee the positioning accuracy and the working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the high-precision nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting under the cell's natural conditions, which is expected to make a significant impact on biology studies, especially for in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.
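The distance-regulated speed adapting strategy can be illustrated by a proportional rule with clamping; this is a sketch of the general idea only, not the authors' exact control law, and the gain and speed limits are invented:

```python
def regulated_speed(distance_um, v_min=0.1, v_max=50.0, gain=5.0):
    """Proportional speed with clamping: move fast when far from the target
    cell, slow down for precision when close (units um and um/s, illustrative)."""
    return max(v_min, min(v_max, gain * distance_um))

print(regulated_speed(100.0))  # far away: clamped to v_max
print(regulated_speed(1.0))    # close: proportional, slower
print(regulated_speed(0.0))    # in contact: creep at v_min
```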

  14. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    Science.gov (United States)

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For most elderly people, unpredictable falling incidents may occur at the corner of stairs or in a long corridor due to body frailty. If we are slow to rescue a falling elder who may be fainting, more serious injury may follow. Traditional security or video surveillance systems need caregivers to monitor a centralized screen continuously, or need an elder to wear sensors to detect falling incidents, which wastes considerable human effort or causes inconvenience for elders. In this paper, we propose an automatic falling-detection algorithm and implement this algorithm in a multi-camera video surveillance system. The algorithm uses each camera to fetch the images from the regions required to be monitored. It then uses a falling-pattern recognition algorithm to determine if a falling incident has occurred. If so, the system sends short messages to someone who needs to be notified. The algorithm has been implemented in a DSP-based hardware acceleration board for functionality proof. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
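As an illustration of falling-pattern recognition (the abstract does not specify the paper's actual algorithm), one common heuristic tracks a person's bounding-box aspect ratio across frames and flags a transition from upright to horizontal:

```python
def detect_fall(aspect_ratios, upright=1.5, fallen=0.8):
    """Toy falling-pattern heuristic (illustration only, not the paper's
    method): flag a fall when a tracked person's height/width bounding-box
    aspect ratio drops from clearly upright to clearly horizontal."""
    was_upright = False
    for r in aspect_ratios:
        if r >= upright:
            was_upright = True
        elif was_upright and r <= fallen:
            return True
    return False

# Assumed per-frame aspect-ratio traces: a collapse vs. sitting down
print(detect_fall([1.8, 1.7, 1.2, 0.5]))   # sudden transition to horizontal
print(detect_fall([1.8, 1.7, 1.3, 1.1]))   # stays roughly upright
```

A real system would add temporal filtering (how fast the ratio drops) and a post-fall immobility check to avoid false alarms from bending or sitting.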

  15. Integrated Controlling System and Unified Database for High Throughput Protein Crystallography Experiments

    International Nuclear Information System (INIS)

    Gaponov, Yu.A.; Igarashi, N.; Hiraki, M.; Sasajima, K.; Matsugaki, N.; Suzuki, M.; Kosuge, T.; Wakatsuki, S.

    2004-01-01

    An integrated controlling system and a unified database for high throughput protein crystallography experiments have been developed. Main features of protein crystallography experiments (purification, crystallization, crystal harvesting, data collection, data processing) were integrated into the software under development. All information necessary to perform protein crystallography experiments is stored (except raw X-ray data that are stored in a central data server) in a MySQL relational database. The database contains four mutually linked hierarchical trees describing protein crystals, data collection of protein crystal and experimental data processing. A database editor was designed and developed. The editor supports basic database functions to view, create, modify and delete user records in the database. Two search engines were realized: direct search of necessary information in the database and object oriented search. The system is based on TCP/IP secure UNIX sockets with four predefined sending and receiving behaviors, which support communications between all connected servers and clients with remote control functions (creating and modifying data for experimental conditions, data acquisition, viewing experimental data, and performing data processing). Two secure login schemes were designed and developed: a direct method (using the developed Linux clients with secure connection) and an indirect method (using the secure SSL connection using secure X11 support from any operating system with X-terminal and SSH support). A part of the system has been implemented on a new MAD beam line, NW12, at the Photon Factory Advanced Ring for general user experiments

  16. Vision-based Nano Robotic System for High-throughput Non-embedded Cell Cutting

    Science.gov (United States)

    Shang, Wanfeng; Lu, Haojian; Wan, Wenfeng; Fukuda, Toshio; Shen, Yajing

    2016-03-01

    Cell cutting is a significant task in biology studies, but highly productive non-embedded cell cutting remains a major challenge for current techniques. This paper proposes a vision-based nano robotic system and then realizes automatic non-embedded cell cutting with this system. First, the nano robotic system is developed and integrated with a nanoknife inside an environmental scanning electron microscope (ESEM). Then, the positions of the nanoknife and the single cell are recognized, and the distance between them is calculated dynamically based on image processing. To guarantee the positioning accuracy and the working efficiency, we propose a distance-regulated speed adapting strategy, in which the moving speed is adjusted intelligently based on the distance between the nanoknife and the target cell. The results indicate that automatic non-embedded cutting can be achieved within 1-2 min with low invasiveness, benefiting from the high-precision nanorobot system and the sharp edge of the nanoknife. This research paves the way for high-throughput cell cutting under the cell's natural conditions, which is expected to make a significant impact on biology studies, especially for in-situ analysis at the cellular and subcellular scale, such as cell interaction investigation, neural signal transduction and low-invasive cell surgery.

  17. High-Throughput Lipolysis in 96-Well Plates for Rapid Screening of Lipid-Based Drug Delivery Systems

    DEFF Research Database (Denmark)

    Mosgaard, Mette D; Sassene, Philip J; Mu, Huiling

    2017-01-01

    The high-throughput in vitro intestinal lipolysis model (HTP), applicable for rapid, small-scale screening of lipid-based drug delivery systems (LbDDSs), was optimized and adapted to be conducted in 96-well plates (HTP-96). Three different LbDDSs (I-III) loaded with danazol or cinnarizine were

  18. Two-Phase Microfluidic Systems for High Throughput Quantification of Agglutination Assays

    KAUST Repository

    Castro, David

    2018-04-01

    Lab-on-Chip, the miniaturization of the chemical and analytical lab, is an endeavor that seems to come out of science fiction yet is slowly becoming a reality. It is a multidisciplinary field that combines different areas of science and engineering. Within these areas, microfluidics is a specialized field that deals with the behavior, control and manipulation of small volumes of fluids. Agglutination assays are rapid, single-step, low-cost immunoassays that use microspheres to detect a wide variety of molecules and pathogens through a specific antigen-antibody interaction. Agglutination assays are particularly suitable for the miniaturization and automation that two-phase microfluidics can offer, a combination that can help tackle the ever-pressing need for high-throughput screening for blood banks, epidemiology, food banks, and the diagnosis of infectious diseases. In this thesis, we present a two-phase microfluidic system capable of incubating and quantifying agglutination assays. The microfluidic channel is a simple fabrication solution, using laboratory tubing. These assays are incubated by highly efficient passive mixing with a sample-to-answer time of 2.5 min, a 5-10 fold improvement over traditional agglutination assays. It has a user-friendly interface that does not require droplet generators, in which a pipette is used to continuously insert assays on demand, with no down-time in between experiments, at 360 assays/h. System parameters are explored, using the streptavidin-biotin interaction as a model assay, with a minimum detection limit of 50 ng/mL using optical image analysis. We compare optical image analysis and light scattering as quantification methods, and demonstrate the first light-scattering quantification of agglutination assays in a two-phase flow format. The approach can potentially be applied to other biomarkers, which we demonstrate using C-reactive protein (CRP) assays.
Using our system, we can take a commercially available CRP qualitative slide

  19. Severity scoring in the critically ill: part 2: maximizing value from outcome prediction scoring systems.

    Science.gov (United States)

    Breslow, Michael J; Badawi, Omar

    2012-02-01

    Part 2 of this review of ICU scoring systems examines how scoring system data should be used to assess ICU performance. There are often two different consumers of these data: ICU clinicians and quality leaders who seek to identify opportunities to improve quality of care and operational efficiency, and regulators, payors, and consumers who want to compare performance across facilities. The former need to know how to garner maximal insight into their care practices; this includes understanding how length of stay (LOS) relates to quality, analyzing the behavior of different subpopulations, and following trends over time. Segregating patients into low-, medium-, and high-risk populations is especially helpful, because care issues and outcomes may differ across this severity continuum. Also, LOS behaves paradoxically in high-risk patients (survivors often have longer LOS than nonsurvivors); failure to examine this subgroup separately can penalize ICUs with superior outcomes. Consumers of benchmarking data often focus on a single score, the standardized mortality ratio (SMR). However, simple SMRs are disproportionately affected by outcomes in high-risk patients, and differences in population composition, even when performance is otherwise identical, can result in different SMRs. Future benchmarking must incorporate strategies to adjust for differences in population composition and report performance separately for low-, medium- and high-acuity patients. Moreover, because many ICUs lack the resources to care for high-acuity patients (predicted mortality >50%), decisions about where patients should receive care must consider both ICU performance scores and their capacity to care for different types of patients.
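The SMR caveat above can be made concrete with a toy calculation: two units with identical stratum-level performance but different case mix report different overall SMRs. All numbers below are invented for illustration:

```python
def smr(observed, expected):
    """Standardized mortality ratio: observed deaths / risk-adjusted expected deaths."""
    return observed / expected

# Identical stratum-level performance (SMR 1.0 in low-risk, 0.9 in high-risk),
# different case mix. Low-risk patients carry 5% predicted mortality, high-risk 60%.
exp_a = 90 * 0.05 + 10 * 0.60                 # unit A: mostly low-risk patients
obs_a = 90 * 0.05 * 1.0 + 10 * 0.60 * 0.9
exp_b = 10 * 0.05 + 90 * 0.60                 # unit B: mostly high-risk patients
obs_b = 10 * 0.05 * 1.0 + 90 * 0.60 * 0.9

# The overall SMRs differ despite identical performance within each stratum
print(round(smr(obs_a, exp_a), 3), round(smr(obs_b, exp_b), 3))
```

The high-acuity-heavy unit's overall SMR is pulled toward its high-risk stratum, which is exactly why the review argues for reporting performance separately by acuity band.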

  20. Design and implementation of a low-cost maximization power conversion system for brushless DC generator

    OpenAIRE

    Abolfazl Halvaei Niasar; AmirHossein Sabbaghean

    2017-01-01

    This paper presents a simple and low-cost method to capture maximum power throughput of a permanent magnet brushless DC (BLDC) generator. Conventional methods of rectification are based on passive converters, and because the current waveform cannot be controlled as an ideal waveform, a highly distorted current is drawn from the brushless generator. This leads to a lower power factor and reduces the efficiency and power-per-ampere capability. So, in this study an active six-switch power converter is employed...

  1. SprayQc: a real-time LC-MS/MS quality monitoring system to maximize uptime using off the shelf components.

    Science.gov (United States)

    Scheltema, Richard A; Mann, Matthias

    2012-06-01

    With the advent of high-throughput mass spectrometry (MS)-based proteomics, the magnitude and complexity of the performed experiments has increased dramatically. Likewise, investments in chromatographic and MS instrumentation are a large proportion of the budget of proteomics laboratories. Guarding measurement quality and maximizing uptime of the LC-MS/MS systems therefore requires constant care despite automated workflows. We describe a real-time surveillance system, called SprayQc, that continuously monitors the status of the peripheral equipment to ensure that operational parameters are within an acceptable range. SprayQc is composed of multiple plug-in software components that use computer vision to analyze electrospray conditions, monitor the chromatographic device for stable backpressure, interact with a column oven to control pressure by temperature, and ensure that the mass spectrometer is still acquiring data. Action is taken when a failure condition has been detected, such as stopping the column oven and the LC flow, as well as automatically notifying the appropriate operator. Additionally, all defined metrics can be recorded synchronized on retention time with the MS acquisition file, allowing for later inspection and providing valuable information for optimization. SprayQc has been extensively tested in our laboratory, supports third-party plug-in development, and is freely available for download from http://sourceforge.org/projects/sprayqc .

  2. Efficient Architectures for Low Latency and High Throughput Trading Systems on the JVM

    Directory of Open Access Journals (Sweden)

    Alexandru LIXANDRU

    2013-01-01

    Full Text Available The motivation for our research starts from the common belief that the Java platform is not suitable for implementing ultra-high performance applications. Java is one of the most widely used software development platforms in the world, and it provides the means for rapid development of robust and complex applications that are easy to extend, ensuring short time-to-market both for initial deliveries and throughout the lifetime of the system. The Java runtime environment, and especially the Java Virtual Machine, on top of which applications are executed, is the principal source of concerns regarding its suitability in the electronic trading environment, mainly because of its implicit memory management. In this paper, we intend to identify some of the most common measures that can be taken, both at the Java runtime environment level and at the application architecture level, which can help Java applications achieve ultra-high performance. We also propose two efficient architectures for exchange trading systems that allow for ultra-low latencies and high throughput.

  3. A high throughput system for the preparation of single stranded templates grown in microculture.

    Science.gov (United States)

    Kolner, D E; Guilfoyle, R A; Smith, L M

    1994-01-01

    A high throughput system for the preparation of single stranded M13 sequencing templates is described. Supernatants from clones grown in 48-well plates are treated with a chaotropic agent to dissociate the phage coat protein. Using a semi-automated cell harvester, the free nucleic acid is bound to a glass fiber filter in the presence of chaotrope and then washed with ethanol by aspiration. Individual glass fiber discs are punched out on the cell harvester and dried briefly. The DNA samples are then eluted in water by centrifugation. The processing time from 96 microcultures to sequence quality templates is approximately 1 hr. Assuming the ability to sequence 400 bases per clone, a 0.5 megabase per day genome sequencing facility will require 6250 purified templates a week. Toward accomplishing this goal we have developed a procedure which is a modification of a method that uses a chaotropic agent and glass fiber filter (Kristensen et al., 1987). By exploiting the ability of a cell harvester to uniformly aspirate and wash 96 samples, a rapid system for high quality template preparation has been developed. Other semi-automated systems for template preparation have been developed using commercially available robotic workstations like the Biomek (Mardis and Roe, 1989). Although minimal human intervention is required, processing time is at least twice as long. Custom systems based on paramagnetic beads (Hawkins et al., 1992) produce DNA in insufficient quantity for direct sequencing and therefore require cycle sequencing. These systems require custom programing, have a fairly high initial cost and have not proven to be as fast as the method reported here.

  4. High throughput web inspection system using time-stretch real-time imaging

    Science.gov (United States)

    Kim, Chanju

    Photonic time-stretch is a novel technology that enables the capture of fast, rare and non-repetitive events. It operates in real time, with the ability to record over long periods while retaining fine temporal resolution. The powerful properties of photonic time-stretch have already been employed in various fields of application, such as analog-to-digital conversion, spectroscopy, laser scanning and microscopy. Further expanding the scope, we fully exploit the time-stretch technology to demonstrate a high throughput web inspection system. Web inspection, namely surface inspection, is a nondestructive evaluation method which is crucial for semiconductor wafer and thin film production. We successfully report a dark-field web inspection system with a line scan rate of 90.9 MHz, which is up to 1000 times faster than conventional inspection instruments. The manufacturing of high quality semiconductor wafers and thin films may directly benefit from this technology, as it can easily locate defects with an area of less than 10 μm × 10 μm while allowing a maximum web flow speed of 1.8 km/s. The thesis provides an overview of our web inspection technique, followed by a description of the photonic time-stretch technique, which is the keystone of our system. A detailed explanation of each component is given to provide a quantitative understanding of the system. Finally, imaging results from a hard-disk sample and flexible films are presented along with a performance analysis of the system. This project was the first application of time-stretch to industrial inspection, and was conducted under financial support and with close involvement by Hitachi, Ltd.

  5. A high throughput mass spectrometry screening analysis based on two-dimensional carbon microfiber fractionation system.

    Science.gov (United States)

    Ma, Biao; Zou, Yilin; Xie, Xuan; Zhao, Jinhua; Piao, Xiangfan; Piao, Jingyi; Yao, Zhongping; Quinto, Maurizio; Wang, Gang; Li, Donghao

    2017-06-09

    A novel high-throughput, solvent-saving and versatile integrated two-dimensional microscale carbon fiber/active carbon fiber system (2DμCFs), which allows a simple and rapid separation of compounds into low-polarity, medium-polarity and high-polarity fractions, has been coupled with ambient ionization-mass spectrometry (ESI-Q-TOF-MS and ESI-QqQ-MS) for screening and quantitative analyses of real samples. 2DμCFs led to a substantial reduction of interferences and minimization of ionization suppression effects, thus increasing the sensitivity and the screening capabilities of the subsequent MS analysis. The method has been applied to the analysis of Schisandra chinensis extracts, obtaining with a single injection a simultaneous determination of 33 compounds of different polarities, such as organic acids, lignans, and flavonoids, in less than 7 min, at low pressures and using small solvent amounts. The method was also validated using 10 model compounds, giving limits of detection (LODs) ranging from 0.3 to 30 ng mL⁻¹, satisfactory recoveries (from 75.8 to 93.2%) and reproducibilities (relative standard deviations, RSDs, from 1.40 to 8.06%).

  6. Screening for Antifibrotic Compounds Using High Throughput System Based on Fluorescence Polarization

    Directory of Open Access Journals (Sweden)

    Branko Stefanovic

    2014-04-01

    Full Text Available Fibroproliferative diseases are one of the leading causes of death worldwide. They are characterized by reactive fibrosis caused by uncontrolled synthesis of type I collagen. There is no cure for fibrosis, and development of therapeutics that can inhibit collagen synthesis is urgently needed. Collagen α1(I) mRNA and α2(I) mRNA encode type I collagen, and they have a unique 5' stem-loop structure in their 5' untranslated regions (5'SL). Collagen 5'SL binds the protein LARP6 with high affinity and specificity. The interaction between LARP6 and the 5'SL is critical for biosynthesis of type I collagen and development of fibrosis in vivo. Therefore, this interaction represents an ideal target for the development of antifibrotic drugs. A high throughput system to screen for chemical compounds that can dissociate LARP6 from 5'SL has been developed. It is based on fluorescence polarization and can be adapted to screen for inhibitors of other protein-RNA interactions. Screening of 50,000 chemical compounds yielded a lead compound that can inhibit type I collagen synthesis at nanomolar concentrations. The development, characteristics, and critical appraisal of this assay are presented.

  7. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities

    Directory of Open Access Journals (Sweden)

    Nale Jennifer

    2010-05-01

    Full Text Available Abstract Background Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. Results We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. Conclusions The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.

  8. Screensaver: an open source lab information management system (LIMS) for high throughput screening facilities.

    Science.gov (United States)

    Tolopko, Andrew N; Sullivan, John P; Erickson, Sean D; Wrobel, David; Chiang, Su L; Rudnicki, Katrina; Rudnicki, Stewart; Nale, Jennifer; Selfors, Laura M; Greenhouse, Dara; Muhlich, Jeremy L; Shamu, Caroline E

    2010-05-18

    Shared-usage high throughput screening (HTS) facilities are becoming more common in academe as large-scale small molecule and genome-scale RNAi screening strategies are adopted for basic research purposes. These shared facilities require a unique informatics infrastructure that must not only provide access to and analysis of screening data, but must also manage the administrative and technical challenges associated with conducting numerous, interleaved screening efforts run by multiple independent research groups. We have developed Screensaver, a free, open source, web-based lab information management system (LIMS), to address the informatics needs of our small molecule and RNAi screening facility. Screensaver supports the storage and comparison of screening data sets, as well as the management of information about screens, screeners, libraries, and laboratory work requests. To our knowledge, Screensaver is one of the first applications to support the storage and analysis of data from both genome-scale RNAi screening projects and small molecule screening projects. The informatics and administrative needs of an HTS facility may be best managed by a single, integrated, web-accessible application such as Screensaver. Screensaver has proven useful in meeting the requirements of the ICCB-Longwood/NSRB Screening Facility at Harvard Medical School, and has provided similar benefits to other HTS facilities.

  9. Increasing plant density in eastern United States broccoli production systems to maximize marketable head yields

    Science.gov (United States)

    Increased demand for fresh market broccoli (Brassica oleracea L. var. italica) has led to increased production along the eastern seaboard of the United States. Maximizing broccoli yields is a primary concern for quickly expanding eastern commercial markets. Thus, a plant density study was carried ...

  10. Toward a generalized and high-throughput enzyme screening system based on artificial genetic circuits.

    Science.gov (United States)

    Choi, Su-Lim; Rha, Eugene; Lee, Sang Jun; Kim, Haseong; Kwon, Kilkoang; Jeong, Young-Su; Rhee, Young Ha; Song, Jae Jun; Kim, Hak-Sung; Lee, Seung-Goo

    2014-03-21

    Large-scale screening of enzyme libraries is essential for the development of cost-effective biological processes, which will be indispensable for the production of sustainable biobased chemicals. Here, we introduce a genetic circuit termed the Genetic Enzyme Screening System that is highly useful for high-throughput enzyme screening from diverse microbial metagenomes. The circuit consists of two AND logics. The first AND logic, the two inputs of which are the target enzyme and its substrate, is responsible for the accumulation of a phenol compound in the cell. The second logic gate then takes as inputs the phenol compound and its inducible transcription factor, whose activation turns on the expression of a reporter gene. We confirmed that an individual cell harboring this genetic circuit can present approximately 100-fold higher cellular fluorescence than the negative control and can be easily quantified by flow cytometry depending on the amounts of phenolic derivatives. The high sensitivity of the genetic circuit enables the rapid discovery of novel enzymes from metagenomic libraries, even for genes that show marginal activities in a host system. The crucial feature of this approach is that this single system can be used to screen a variety of enzymes that produce a phenol compound from respective synthetic phenyl-substrates, including cellulase, lipase, alkaline phosphatase, tyrosine phenol-lyase, and methyl parathion hydrolase. Consequently, the highly sensitive and quantitative nature of this genetic circuit along with flow cytometry techniques could provide a widely applicable toolkit for discovering and engineering novel enzymes at a single cell level.

  11. GiA Roots: software for the high throughput analysis of plant root system architecture

    Science.gov (United States)

    2012-01-01

    Background Characterizing root system architecture (RSA) is essential to understanding the development and function of vascular plants. Identifying RSA-associated genes also represents an underexplored opportunity for crop improvement. Software tools are needed to accelerate the pace at which quantitative traits of RSA are estimated from images of root networks. Results We have developed GiA Roots (General Image Analysis of Roots), a semi-automated software tool designed specifically for the high-throughput analysis of root system images. GiA Roots includes user-assisted algorithms to distinguish root from background and a fully automated pipeline that extracts dozens of root system phenotypes. Quantitative information on each phenotype, along with intermediate steps for full reproducibility, is returned to the end-user for downstream analysis. GiA Roots has a GUI front end and a command-line interface for interweaving the software into large-scale workflows. GiA Roots can also be extended to estimate novel phenotypes specified by the end-user. Conclusions We demonstrate the use of GiA Roots on a set of 2393 images of rice roots representing 12 genotypes from the species Oryza sativa. We validate trait measurements against prior analyses of this image set that demonstrated that RSA traits are likely heritable and associated with genotypic differences. Moreover, we demonstrate that GiA Roots is extensible and an end-user can add functionality so that GiA Roots can estimate novel RSA traits. In summary, we show that the software can function as an efficient tool as part of a workflow to move from large numbers of root images to downstream analysis. PMID:22834569

  12. A novel hanging spherical drop system for the generation of cellular spheroids and high throughput combinatorial drug screening.

    Science.gov (United States)

    Neto, A I; Correia, C R; Oliveira, M B; Rial-Hermida, M I; Alvarez-Lorenzo, C; Reis, R L; Mano, J F

    2015-04-01

    We propose a novel hanging spherical drop system for anchoring arrays of droplets of cell suspension, based on the use of biomimetic superhydrophobic flat substrates with controlled positional adhesion and minimum contact with a solid substrate. By turning the platform face down, it was possible to generate independent spheroid bodies in a high throughput manner, in order to mimic in vivo tumour models on the lab-on-chip scale. To validate this system for drug screening purposes, the toxicity of the anti-cancer drug doxorubicin in cell spheroids was tested and compared to cells in 2D culture. The advantages presented by this platform, such as the feasibility of the system and the ability to control the size uniformity of the spheroids, emphasize its potential to be used as a new low cost toolbox for high-throughput drug screening and in cell or tissue engineering.

  13. Power maximization method for land-transportable fully passive lead–bismuth cooled small modular reactor systems

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jaehyun, E-mail: chojh@kaeri.re.kr [Korea Atomic Energy Research Institute, 1405 Daedeok-daero, Yuseong-gu, Daejeon 305-353 (Korea, Republic of); Shin, Yong-Hoon; Hwang, Il Soon [Seoul National University, Sillim-dong, Gwanak-gu, Seoul 151-742 (Korea, Republic of)

    2015-08-15

    Highlights: • The power maximization method for LBE natural circulation cooled SMRs was developed. • The core power was considered from both the neutronics and thermal-hydraulics viewpoints. • The limitations for the design of LBE natural circulation cooled SMRs were summarized. • The necessary conditions for safe shutdown in accidents were developed. • The maximized power in the case study is 206 MW thermal. - Abstract: Although current pressurized water reactors (PWRs) have significantly contributed to global energy supply, PWR technology has not been considered a trustworthy energy solution owing to its problems of spent nuclear fuels (SNFs), nuclear safety, and nuclear economy. In order to overcome these problems, a lead–bismuth eutectic (LBE) fully passive cooling small modular reactor (SMR) system is suggested. This technology can not only provide a solution for the problems of SNFs through the transmutation feature of the LBE coolant, but also strengthen safety and economy through the concept of natural circulation cooling SMRs. It is necessary to maximize the advantages, namely safety and economy, of this type of nuclear power plant for broader applications in the future. Accordingly, the objective of this study is to maximize the reactor core power while satisfying the limitations of shipping size, materials endurance, and criticality of a long-burning core, as well as safety under beyond-design-basis events. To achieve these objectives, the design limitations of natural-circulation LBE-cooled SMRs are derived. Then, the power maximization method is developed based on these design limitations. The results of this study are expected to contribute to the effectiveness of the reactor design stage by providing insights to designers, as well as by formulating methods for the power maximization of other types of SMRs.

  14. Nonlinear model dynamics for closed-system, constrained, maximal-entropy-generation relaxation by energy redistribution

    International Nuclear Information System (INIS)

    Beretta, Gian Paolo

    2006-01-01

    We discuss a nonlinear model for relaxation by energy redistribution within an isolated, closed system composed of noninteracting identical particles with energy levels e_i, i = 1, 2, ..., N. The time-dependent occupation probabilities p_i(t) are assumed to obey the nonlinear rate equations τ dp_i/dt = −p_i ln p_i − α(t)p_i − β(t)e_i p_i, where α(t) and β(t) are functionals of the p_i(t) that maintain invariant the mean energy E = Σ_{i=1..N} e_i p_i(t) and the normalization condition 1 = Σ_{i=1..N} p_i(t). The entropy S(t) = −k_B Σ_{i=1..N} p_i(t) ln p_i(t) is a nondecreasing function of time until the initially nonzero occupation probabilities reach a Boltzmann-like canonical distribution over the occupied energy eigenstates. Initially zero occupation probabilities, instead, remain zero at all times. The solutions p_i(t) of the rate equations are unique and well defined for arbitrary initial conditions p_i(0) and for all times. The existence and uniqueness both forward and backward in time allows the reconstruction of the ancestral or primordial lowest-entropy state. By casting the rate equations in terms not of the p_i but of their positive square roots √(p_i), they unfold from the assumption that time evolution is at all times along the local direction of steepest entropy ascent or, equivalently, of maximal entropy generation. These rate equations have the same mathematical structure and basic features as the nonlinear dynamical equation proposed in a series of papers ending with G. P. Beretta, Found. Phys. 17, 365 (1987) and recently rediscovered by S. Gheorghiu-Svirschevski [Phys. Rev. A 63, 022105 (2001); 63, 054102 (2001)]. Numerical results illustrate the features of the dynamics and the differences from the rate equations recently considered for the same problem by M. Lemanska and Z. Jaeger [Physica D 170, 72 (2002)]. We also interpret the functionals k_B α(t) and k_B β(t) as nonequilibrium generalizations of the thermodynamic-equilibrium Massieu
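    The constrained dynamics above lend themselves to a direct numerical sketch: at each instant, α(t) and β(t) follow from a 2×2 linear solve that forces the time derivatives of the normalization and of the mean energy to vanish. The following forward-Euler integration (energy levels and initial distribution are arbitrary illustrative choices, not from the paper) reproduces the qualitative behavior: conserved E and normalization, nondecreasing S(t), and relaxation toward a Boltzmann-like distribution.

```python
import numpy as np

def rate(p, e, tau=1.0):
    """dp/dt for tau*dp_i/dt = -p_i ln p_i - alpha*p_i - beta*e_i*p_i."""
    s = np.where(p > 0, -p * np.log(np.clip(p, 1e-300, None)), 0.0)
    E = np.dot(e, p)
    # alpha(t), beta(t) solve a 2x2 linear system so that d/dt(sum p) = 0
    # and d/dt(sum e*p) = 0, i.e. normalization and mean energy stay invariant.
    A = np.array([[1.0, E], [E, np.dot(e * e, p)]])
    alpha, beta = np.linalg.solve(A, np.array([s.sum(), np.dot(e, s)]))
    return (s - alpha * p - beta * e * p) / tau

e = np.array([0.0, 1.0, 2.0, 3.0])      # energy levels (illustrative)
p = np.array([0.7, 0.1, 0.1, 0.1])      # initial occupation probabilities
E0 = float(np.dot(e, p))
entropy = lambda q: float(-np.sum(q[q > 0] * np.log(q[q > 0])))
S0 = entropy(p)

dt = 1e-3
for _ in range(20000):                  # forward-Euler relaxation to t = 20*tau
    p = p + dt * rate(p, e)
```

After relaxation, p approximates the canonical distribution with the same mean energy: occupation probabilities decrease with increasing e_i, while Σp and E match their initial values.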

  15. Development of droplet-based microfluidic systems for single-cell high-throughput screening

    DEFF Research Database (Denmark)

    Chen, Jun; Jensen, Thomas Glasdam; Godina, Alexei

    2014-01-01

    High-throughput screening (HTS) plays an important role in the development of microbial cell factories. One of the most popular approaches is to use microplates combined with the application of robotics, liquid handling and sophisticated detection methods. However, these workstations require large investment, and the steady growth of combinatorial library sizes over recent decades is gradually outstripping their screening capacity. Here, we are developing a feasible high-throughput system that uses microfluidics to compartmentalize single cells for propagation and analysis in monodisperse picoliter aqueous droplets surrounded by an immiscible fluorinated oil phase. Our aim is to use this system to facilitate the screening process for both the biotechnology and food industry.

  16. Design and implementation of a low-cost maximization power conversion system for brushless DC generator

    Directory of Open Access Journals (Sweden)

    Abolfazl Halvaei Niasar

    2017-12-01

    Full Text Available This paper presents a simple and low-cost method to capture the maximum power throughput of a permanent magnet brushless DC (BLDC) generator. Conventional methods of rectification are based on passive converters, and because the current waveform cannot be controlled to an ideal waveform, a highly distorted current is drawn from the brushless generator. This leads to a lower power factor and reduces the efficiency and power-per-ampere capability. So, in this study an active six-switch power converter is employed and, based on the phase back-EMF voltage, an optimum current waveform is generated. The phase currents are controlled in phase with the phase voltages, and their magnitudes are adjusted to regulate the DC-link voltage. The proposed control theory is verified by simulations for a BLDC generator and a permanent magnet synchronous generator (PMSG). Moreover, some experimental results are given to demonstrate the theoretical and simulation results.

  17. Information-guided transmission in decode-and-forward relaying systems: Spatial exploitation and throughput enhancement

    KAUST Repository

    Yang, Yuli; Aissa, Sonia

    2011-01-01

    In addressing the issue of achieving high throughput in half-duplex relay channels, we exploit a concept of information-guided transmission for the network consisting of a source node, a destination node, and multiple half-duplex relay nodes

  18. Application of unmanned aerial systems for high throughput phenotyping of large wheat breeding nurseries.

    Science.gov (United States)

    Haghighattalab, Atena; González Pérez, Lorena; Mondal, Suchismita; Singh, Daljit; Schinstock, Dale; Rutkoski, Jessica; Ortiz-Monasterio, Ivan; Singh, Ravi Prakash; Goodin, Douglas; Poland, Jesse

    2016-01-01

    Low cost unmanned aerial systems (UAS) have great potential for rapid proximal measurements of plants in agriculture. In the context of plant breeding and genetics, current approaches for phenotyping a large number of breeding lines under field conditions require substantial investments in time, cost, and labor. For field-based high-throughput phenotyping (HTP), UAS platforms can provide high-resolution measurements for small plot research, while enabling the rapid assessment of tens-of-thousands of field plots. The objective of this study was to complete a baseline assessment of the utility of UAS for assessing field trials as commonly implemented in wheat breeding programs. We developed a semi-automated image-processing pipeline to extract plot level data from UAS imagery. The image dataset was processed using a photogrammetric pipeline based on image orientation and radiometric calibration to produce orthomosaic images. We also examined the relationships between vegetation indices (VIs) extracted from high spatial resolution multispectral imagery collected with two different UAS systems (eBee Ag carrying a MultiSpec 4C camera, and IRIS+ quadcopter carrying a modified NIR Canon S100) and ground truth spectral data from a hand-held spectroradiometer. We found good correlation between the VIs obtained from UAS platforms and ground-truth measurements and observed high broad-sense heritability for VIs. We determined that radiometric calibration methods developed for satellite imagery significantly improved the precision of VIs from the UAS. We observed that VIs extracted from calibrated images of the Canon S100 had a significantly higher correlation to the spectroradiometer (r = 0.76) than VIs from the MultiSpec 4C camera (r = 0.64). Their correlation to spectroradiometer readings was as high as or higher than that of repeated measurements with the spectroradiometer per se. The approaches described here for UAS imaging and extraction of proximal sensing data enable collection of HTP
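    A vegetation index such as NDVI reduces to per-pixel band arithmetic on calibrated reflectance, followed by plot-level averaging; a minimal sketch with illustrative values (not data from the study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from calibrated reflectance bands."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against 0/0 pixels

# Plot-level VI: compute per pixel over the plot's pixels, then average,
# as in plot-extraction pipelines. Reflectance values below are illustrative.
nir = np.array([[0.55, 0.60], [0.58, 0.52]])
red = np.array([[0.08, 0.10], [0.09, 0.07]])
plot_ndvi = float(ndvi(nir, red).mean())
```

Healthy vegetation typically yields NDVI well above bare soil, so plot-level means like this one fall in the upper part of the [-1, 1] range.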

  19. On the throughput of cognitive radio MIMO systems assisted with UAV relays

    KAUST Repository

    Sboui, Lokman

    2017-07-20

    We analyze the achievable rates of a cognitive radio MIMO system assisted by an unmanned aerial vehicle (UAV) relay. The primary user (PU) and the secondary user (SU) aim to communicate to the closest primary base station (BS) via a multi-access channel through the same UAV relay. The SU message is then forwarded from the primary BS to the secondary network with a certain incentive reward as part of the cooperation protocol between the two networks. We propose a special linear precoding scheme to enable the SU to exploit the PU's free eigenmodes. We also present the expression of the power that maximizes both primary and secondary rates under power-budget, relay-power, and interference constraints. In the numerical results, we evaluate the PU and SU rates of the proposed scheme with respect to various problem parameters. We also highlight the effect of the UAV altitude on the SU and PU rates. Finally, we show that the relay matrix variation affects both rates, which reach their peaks at different values of the matrix.

  20. Lessons we learned from high-throughput and top-down systems biology analyses about glioma stem cells.

    Science.gov (United States)

    Mock, Andreas; Chiblak, Sara; Herold-Mende, Christel

    2014-01-01

    A growing body of evidence suggests that glioma stem cells (GSCs) account for tumor initiation, therapy resistance, and the subsequent regrowth of gliomas. Thus, continuous efforts have been undertaken to further characterize this subpopulation of less differentiated tumor cells. Although we are able to enrich GSCs, we still lack a comprehensive understanding of GSC phenotypes and behavior. The advent of high-throughput technologies raised hope that incorporation of these newly developed platforms would help to tackle such questions. Since then, a number of comparative genome-, transcriptome- and proteome-wide studies on GSCs have been conducted, giving new insights into GSC biology. However, lessons had to be learned in designing high-throughput experiments, and some of the resulting conclusions fell short of expectations because the studies were performed on only a few GSC lines or at one molecular level instead of with an integrative poly-omics approach. Despite these shortcomings, our knowledge of GSC biology has markedly expanded due to a number of survival-associated biomarkers as well as glioma-relevant signaling pathways and therapeutic targets being identified. In this article we review recent findings obtained by comparative high-throughput analyses of GSCs. We further summarize fundamental concepts of systems biology as well as its applications for glioma stem cell research.

  1. Efficient Wideband Spectrum Sensing with Maximal Spectral Efficiency for LEO Mobile Satellite Systems

    Directory of Open Access Journals (Sweden)

    Feilong Li

    2017-01-01

    Full Text Available The usable satellite spectrum is becoming scarce due to static spectrum allocation policies. Cognitive radio approaches have already demonstrated their potential towards spectral efficiency by providing more spectrum access opportunities to the secondary user (SU) with sufficient protection of the licensed primary user (PU). Hence, recent scientific literature has focused on the tradeoff between spectrum reuse and PU protection within narrowband spectrum sensing (SS) in terrestrial wireless sensing networks. However, those narrowband SS techniques investigated in the context of terrestrial CR may not be applicable for detecting wideband satellite signals. In this paper, we investigate the problem of jointly designing the sensing time and the hard fusion scheme to maximize SU spectral efficiency in the scenario of low earth orbit (LEO) mobile satellite services based on wideband spectrum sensing. A compressed detection model is established to prove that there indeed exists one optimal sensing time achieving maximal spectral efficiency. Moreover, we propose a novel wideband cooperative spectrum sensing (CSS) framework where each SU's reporting duration can be utilized for the sensing of the following SU. The sensing performance benefits from the novel CSS framework because the equivalent sensing time is extended by making full use of the reporting slot. Furthermore, with respect to time-varying channels, spatiotemporal CSS (ST-CSS) is presented to attain space and time diversity gains simultaneously under a hard decision fusion rule. Computer simulations show that the joint optimization of sensing time, hard fusion rule and scheduling strategy achieves significant improvement in spectral efficiency. Additionally, the novel ST-CSS scheme achieves much higher spectral efficiency than the general CSS framework.
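    The existence of an interior optimal sensing time can be illustrated with the classical energy-detector sensing-throughput model (a textbook formulation, not the paper's compressed-detection model; frame length, sampling rate, SNR and target detection probability below are illustrative assumptions): as the sensing time τ grows, the false-alarm probability at a fixed detection probability falls, but the fraction of the frame left for transmission shrinks, so efficiency peaks at an interior τ.

```python
import math

def q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def q_inv(p):
    """Numerical inverse of Q via bisection (Q is strictly decreasing)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if q(mid) > p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def spectral_efficiency(tau, T=0.1, fs=6e6, snr=0.05, pd=0.9):
    """Transmit fraction of the frame times the probability of no false alarm
    for an energy detector held at target detection probability pd."""
    pf = q(math.sqrt(2 * snr + 1) * q_inv(pd) + math.sqrt(tau * fs) * snr)
    return (T - tau) / T * (1 - pf)

# Sweep the sensing time over the frame: efficiency first rises (false alarms
# fall), then falls (less time left to transmit) -> a single interior optimum.
taus = [i * 1e-4 for i in range(1, 1000)]
effs = [spectral_efficiency(t) for t in taus]
best_tau = taus[max(range(len(effs)), key=effs.__getitem__)]
```

With these assumed parameters, the optimum lies at a sensing time of roughly a millisecond, a small fraction of the 100 ms frame, which is the qualitative behavior the abstract refers to.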

  2. Throughput rate study

    International Nuclear Information System (INIS)

    Ford, L.; Bailey, W.; Gottlieb, P.; Emami, F.; Fleming, M.; Robertson, D.

    1993-01-01

    The Civilian Radioactive Waste Management System (CRWMS) Management and Operating (M&O) Contractor has completed a study to analyze system-wide impacts of operating the CRWMS at varying throughput rates, including the 3000 MTU/year rate which has been assumed in the past. Impacts of throughput rate on all phases of CRWMS operations (acceptance, transportation, storage and disposal) were evaluated. The results of the study indicate that a range from 3000 to 5000 MTU/year is preferred, based on system cost per MTU of SNF emplaced and logistics constraints.

  3. A high-throughput screening system for barley/powdery mildew interactions based on automated analysis of light micrographs.

    Science.gov (United States)

    Ihlow, Alexander; Schweizer, Patrick; Seiffert, Udo

    2008-01-23

    To find candidate genes that potentially influence the susceptibility or resistance of crop plants to powdery mildew fungi, an assay system based on transient-induced gene silencing (TIGS) as well as transient over-expression in single epidermal cells of barley has been developed. However, this system relies on quantitative microscopic analysis of the barley/powdery mildew interaction and will only become a high-throughput tool of phenomics upon automation of the most time-consuming steps. We have developed a high-throughput screening system based on a motorized microscope which evaluates the specimens fully automatically. A large-scale double-blind verification of the system showed an excellent agreement of manual and automated analysis and proved the system to work dependably. Furthermore, in a series of bombardment experiments an RNAi construct targeting the Mlo gene was included, which is expected to phenocopy resistance mediated by recessive loss-of-function alleles such as mlo5. In most cases, the automated analysis system recorded a shift towards resistance upon RNAi of Mlo, thus providing proof of concept for its usefulness in detecting gene-target effects. Besides saving labor and enabling a screening of thousands of candidate genes, this system offers continuous operation of expensive laboratory equipment and provides a less subjective analysis as well as a complete and enduring documentation of the experimental raw data in terms of digital images. In general, it proves the concept of enabling available microscope hardware to handle challenging screening tasks fully automatically.

  4. Fully Automated Electro Membrane Extraction Autosampler for LC-MS Systems Allowing Soft Extractions for High-Throughput Applications

    DEFF Research Database (Denmark)

    Fuchs, David; Pedersen-Bjergaard, Stig; Jensen, Henrik

    2016-01-01

    was optimized for soft extraction of analytes and high sample throughput. Further, it was demonstrated that by flushing the EME-syringe with acidic wash buffer and reverting the applied electric potential, carry-over between samples can be reduced to below 1%. Performance of the system was characterized (RSD…); a complete analytical workflow of purification, separation, and analysis of a sample could be achieved within only 5.5 min. With the developed system large sequences of samples could be analyzed in a completely automated manner. This high degree of automation makes the developed EME-autosampler a powerful tool

  5. Toward a Low-Cost System for High-Throughput Image-Based Phenotyping of Root System Architecture

    Science.gov (United States)

    Davis, T. W.; Schneider, D. J.; Cheng, H.; Shaw, N.; Kochian, L. V.; Shaff, J. E.

    2015-12-01

    Root system architecture is being studied more closely for improved nutrient acquisition, stress tolerance and carbon sequestration, by relating genetic material to the preferential physical features it corresponds to. This information can help direct plant breeders in addressing the growing concerns regarding the global demand on crops and fossil fuels. Supporting this effort requires making high-throughput image-based phenotyping of plant roots, at the individual plant scale, simpler and more affordable. Our goal is to create an affordable and portable product for simple image collection, processing and management that will extend root phenotyping to institutions with limited funding (e.g., in developing countries). Thus, a new integrated system has been developed using the Raspberry Pi single-board computer. Similar to other 3D-based imaging platforms, the system utilizes a stationary camera to photograph a rotating crop root system (e.g., rice, maize or sorghum) that is suspended either in a gel or on a mesh (for hydroponics). In contrast, the new design takes advantage of powerful open-source hardware and software to reduce the system costs, simplify the imaging process, and manage the large datasets produced by the high-resolution photographs. A newly designed graphical user interface (GUI) unifies the system controls (e.g., adjusting camera and motor settings and orchestrating the motor motion with image capture), making it easier to accommodate a variety of experiments. During each imaging session, integral metadata necessary for reproducing experiment results are collected (e.g., plant type and age, growing conditions and treatments, camera settings) using hierarchical data format files. These metadata are searchable within the GUI and can be selected and extracted for further analysis. The GUI also supports an image previewer that performs limited image processing (e.g., thresholding and cropping). Root skeletonization, 3D reconstruction and
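    The previewer's limited processing (thresholding and cropping) amounts to simple array operations; a minimal sketch, where the function name and layout are illustrative assumptions rather than the system's actual API:

```python
import numpy as np

def preview(image, threshold=0.5, crop=None):
    """Illustrative preview step: optional crop, then binary threshold.
    `image` is a 2-D array of grayscale intensities in [0, 1];
    `crop` is an optional (row0, row1, col0, col1) window."""
    if crop is not None:
        r0, r1, c0, c1 = crop
        image = image[r0:r1, c0:c1]
    return (image > threshold).astype(np.uint8)  # 1 = candidate root pixel

# A bright vertical streak (the "root") against a darker background:
img = np.array([[0.1, 0.9, 0.2],
                [0.8, 0.95, 0.1],
                [0.2, 0.85, 0.3]])
mask = preview(img, threshold=0.5)
```

Downstream steps such as skeletonization would consume the binary mask produced here.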

  6. Maximal Ratio Combining Using Channel Estimation in Chaos Based Pilot-Added DS-CDMA System with Antenna Diversity

    Directory of Open Access Journals (Sweden)

    Meher Krishna Patel

    2017-01-01

    Full Text Available This paper presents an adaptive multiuser transceiver scheme for DS-CDMA systems in which pilot symbols are added to users’ data to estimate the complex channel fading coefficients. The performance of receiver antenna diversity with the maximal ratio combining (MRC) technique is analyzed for imperfect channel estimation in flat fading environments. The complex fading coefficients are estimated using the least mean square (LMS) algorithm, and these coefficients are used by the maximal ratio combiner to generate the decision variable. The probability of error is derived in closed form. Further, the effect of pilot signal power on bit error rate (BER) is investigated, along with the BER performance of a multiplexed pilot-and-data transmission scenario. We compare the performance of the added and multiplexed pilot-data systems and summarize the advantages of each. The proposed CDMA technique uses a chaotic sequence as the spreading sequence. Assuming proper synchronization, computer simulation results demonstrate improved bit error rate performance when the channel estimator is used in the chaos-based CDMA system, and the receiver antenna diversity technique further improves the performance of the proposed system. Moreover, no channel estimator is required if the transmitted signal suffers no phase distortion.
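The two signal-processing steps described in this record, LMS estimation of the fading coefficients from pilot symbols followed by maximal ratio combining, can be sketched as follows. This is an illustrative simulation only: the antenna count, LMS step size, noise levels and BPSK pilots are assumptions, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_ANT = 2      # receive antennas (diversity branches), assumed
N_PILOT = 200  # pilot symbols used for LMS training, assumed
MU = 0.05      # LMS step size, assumed

# True flat-fading coefficients: one complex gain per antenna
h_true = (rng.standard_normal(N_ANT) + 1j * rng.standard_normal(N_ANT)) / np.sqrt(2)

# Known BPSK pilot sequence and its noisy observation at each antenna
pilots = rng.choice([-1.0, 1.0], N_PILOT)
noise = 0.1 * (rng.standard_normal((N_ANT, N_PILOT))
               + 1j * rng.standard_normal((N_ANT, N_PILOT)))
rx = h_true[:, None] * pilots[None, :] + noise

# LMS estimate of each branch gain: h <- h + mu * (r - h*x) * conj(x)
h_est = np.zeros(N_ANT, dtype=complex)
for n in range(N_PILOT):
    err = rx[:, n] - h_est * pilots[n]
    h_est += MU * err * pilots[n]      # conj(x) = x for real BPSK pilots

# MRC decision variable for a data symbol s: z = sum_a conj(h_a) * r_a
s = 1.0
r = h_true * s + 0.05 * (rng.standard_normal(N_ANT) + 1j * rng.standard_normal(N_ANT))
z = np.sum(np.conj(h_est) * r)
print(np.abs(h_true - h_est).max())    # small estimation error after training
```

With accurate estimates, the combiner weights each branch by the conjugate of its gain, so the branch signals add coherently while the noise adds incoherently.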

  7. A Customizable Flow Injection System for Automated, High Throughput, and Time Sensitive Ion Mobility Spectrometry and Mass Spectrometry Measurements.

    Science.gov (United States)

    Orton, Daniel J; Tfaily, Malak M; Moore, Ronald J; LaMarche, Brian L; Zheng, Xueyun; Fillmore, Thomas L; Chu, Rosalie K; Weitz, Karl K; Monroe, Matthew E; Kelly, Ryan T; Smith, Richard D; Baker, Erin S

    2018-01-02

    To better understand disease conditions and environmental perturbations, multiomic studies combining proteomic, lipidomic, and metabolomic analyses are vastly increasing in popularity. In a multiomic study, a single sample is typically extracted in multiple ways, and various analyses are performed using different instruments, most often based upon mass spectrometry (MS). Thus, one sample becomes many measurements, making high throughput and reproducible evaluations a necessity. One way to address the numerous samples and varying instrumental conditions is to utilize a flow injection analysis (FIA) system for rapid sample injections. While some FIA systems have been created to address these challenges, many have limitations such as costly consumables, low pressure capabilities, limited pressure monitoring, and fixed flow rates. To address these limitations, we created an automated, customizable FIA system capable of operating at a range of flow rates (∼50 nL/min to 500 μL/min) to accommodate both low- and high-flow MS ionization sources. This system also functions at varying analytical throughputs from 24 to 1200 samples per day to enable different MS analysis approaches. Applications ranging from native protein analyses to molecular library construction were performed using the FIA system, and results showed a highly robust and reproducible platform capable of providing consistent performance over many days without carryover, as long as washing buffers specific to each molecular analysis were utilized.

  8. A Customizable Flow Injection System for Automated, High Throughput, and Time Sensitive Ion Mobility Spectrometry and Mass Spectrometry Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Orton, Daniel J. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Tfaily, Malak M. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Moore, Ronald J. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; LaMarche, Brian L. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Zheng, Xueyun [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Fillmore, Thomas L. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Chu, Rosalie K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Weitz, Karl K. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Monroe, Matthew E. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Kelly, Ryan T. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Smith, Richard D. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States; Baker, Erin S. [Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA 99352, United States

    2017-12-13

    To better understand disease conditions and environmental perturbations, multi-omic studies (i.e. proteomic, lipidomic, metabolomic, etc. analyses) are vastly increasing in popularity. In a multi-omic study, a single sample is typically extracted in multiple ways and numerous analyses are performed using different instruments. Thus, one sample becomes many analyses, making high throughput and reproducible evaluations a necessity. One way to address the numerous samples and varying instrumental conditions is to utilize a flow injection analysis (FIA) system for rapid sample injection. While some FIA systems have been created to address these challenges, many have limitations such as high consumable costs, low pressure capabilities, limited pressure monitoring and fixed flow rates. To address these limitations, we created an automated, customizable FIA system capable of operating at diverse flow rates (~50 nL/min to 500 µL/min) to accommodate low- and high-flow instrument sources. This system can also operate at varying analytical throughputs from 24 to 1200 samples per day to enable different MS analysis approaches. Applications ranging from native protein analyses to molecular library construction were performed using the FIA system. The results from these studies showed a highly robust platform, providing consistent performance over many days without carryover as long as washing buffers specific to each molecular analysis were utilized.

  9. An automated, high-throughput plant phenotyping system using machine learning-based plant segmentation and image analysis.

    Science.gov (United States)

    Lee, Unseok; Chang, Sungyul; Putra, Gian Anantrio; Kim, Hyoungseok; Kim, Dong Hwan

    2018-01-01

    A high-throughput plant phenotyping system automatically observes and grows many plant samples. Many plant sample images are acquired by the system to determine the characteristics of the plants (populations). Stable image acquisition and processing is very important to accurately determine the characteristics. However, hardware for acquiring plant images rapidly and stably, while minimizing plant stress, is lacking. Moreover, most software cannot adequately handle large-scale plant imaging. To address these problems, we developed a new, automated, high-throughput plant phenotyping system using simple and robust hardware, and an automated plant-imaging-analysis pipeline consisting of machine-learning-based plant segmentation. Our hardware acquires images reliably and quickly and minimizes plant stress. Furthermore, the images are processed automatically. In particular, large-scale plant-image datasets can be segmented precisely using a classifier developed using a superpixel-based machine-learning algorithm (Random Forest), and variations in plant parameters (such as area) over time can be assessed using the segmented images. We performed comparative evaluations to identify an appropriate learning algorithm for our proposed system, and tested three robust learning algorithms. We developed not only an automatic analysis pipeline but also a convenient means of plant-growth analysis that provides a learning data interface and visualization of plant growth trends. Thus, our system allows end-users such as plant biologists to analyze plant growth via large-scale plant image data easily.

  10. Recent Advances in Nanobiotechnology and High-Throughput Molecular Techniques for Systems Biomedicine

    Science.gov (United States)

    Kim, Eung-Sam; Ahn, Eun Hyun; Chung, Euiheon; Kim, Deok-Ho

    2013-01-01

    Nanotechnology-based tools are beginning to emerge as promising platforms for quantitative high-throughput analysis of live cells and tissues. Despite unprecedented progress made over the last decade, a challenge still lies in integrating emerging nanotechnology-based tools into macroscopic biomedical apparatuses for practical purposes in biomedical sciences. In this review, we discuss the recent advances and limitations in the analysis and control of mechanical, biochemical, fluidic, and optical interactions in the interface areas of nanotechnology-based materials and living cells in both in vitro and in vivo settings. PMID:24258011

  11. High resolution light-sheet based high-throughput imaging cytometry system enables visualization of intra-cellular organelles

    Science.gov (United States)

    Regmi, Raju; Mohan, Kavya; Mondal, Partha Pratim

    2014-09-01

    Visualization of intracellular organelles is achieved using a newly developed high-throughput imaging cytometry system. This system interrogates the microfluidic channel using a sheet of light rather than existing point-based scanning techniques. The advantages of the developed system are many, including single-shot scanning of specimens flowing through the microfluidic channel at flow rates ranging from micro- to nanoliters per minute. Moreover, this opens up in-vivo imaging of sub-cellular structures and simultaneous cell counting in an imaging cytometry system. We recorded a maximum count of 2400 cells/min at a flow rate of 700 nl/min, and simultaneous visualization of the fluorescently-labeled mitochondrial network in HeLa cells during flow. The developed imaging cytometry system may find immediate application in biotechnology, fluorescence microscopy and nano-medicine.

  12. Optimal size of stochastic Hodgkin-Huxley neuronal systems for maximal energy efficiency in coding pulse signals

    Science.gov (United States)

    Yu, Lianchun; Liu, Liwei

    2014-03-01

    The generation and conduction of action potentials (APs) represents a fundamental means of communication in the nervous system and is a metabolically expensive process. In this paper, we investigate the energy efficiency of neural systems in transferring pulse signals with APs. By analytically solving a bistable neuron model that mimics AP generation as a particle crossing the barrier of a double well, we find the optimal number of ion channels that maximizes the energy efficiency of a neuron. We also investigate the energy efficiency of a neuron population in which the input pulse signals are represented by synchronized spikes and read out with a downstream coincidence detector neuron. We find an optimal number of neurons in the population, as well as the number of ion channels in each neuron, that maximizes the energy efficiency. The energy efficiency also depends on the characteristics of the input signals, e.g., the pulse strength and the interpulse intervals. These results are confirmed by computer simulation of the stochastic Hodgkin-Huxley model with a detailed description of random ion channel gating. We argue that the tradeoff between signal transmission reliability and energy cost may influence the size of neural systems when energy use is constrained.

  13. Maximization of primary energy savings of solar heating and cooling systems by transient simulations and computer design of experiments

    International Nuclear Information System (INIS)

    Calise, F.; Palombo, A.; Vanoli, L.

    2010-01-01

    In this paper, the performance of solar-assisted heating and cooling systems is simulated and analyzed. Three different plant layouts are considered: (i) the first consists of evacuated solar collectors and a single-stage LiBr-H2O absorption chiller, with an electric water-cooled chiller activated as backup when solar radiation is insufficient; (ii) the second is similar to the first, but the absorption chiller and the solar collector area are sized to balance only about 30% of the building cooling load; (iii) the third differs from the first in that the auxiliary electric chiller is replaced by a gas-fired heater. These system configurations also include circulation pumps, storage tanks, feedback controllers, mixers, diverters and on/off hysteresis controllers, all modelled to maximize system energy efficiency. In order to simulate the systems' performance under dynamic heating/cooling loads, a single-lumped-capacitance building is also modelled and implemented in the computer code. A cost model is developed to calculate the systems' operating and capital costs. All the models and the corresponding simulations are implemented in TRNSYS. A design-of-experiments procedure is also included; with this tool, the effects of varying the system operating parameters on energy efficiency are analyzed, and the set of synthesis/design variables maximizing the system's energetic performance can be identified. The annual primary energy saving is chosen as the optimization objective function, whereas collector slope, pump flows, set-point temperatures and tank volume are selected as the design variables to be optimized. A case study was developed for an office building located in South Italy. Here, the energetic and the economic analysis for all the three considered system layouts are carried out. The

  14. Empirical investigation on the dependence of TCP downstream throughput on SNR in an IEEE802.11b WLAN system

    Directory of Open Access Journals (Sweden)

    Ikponmwosa Oghogho

    2017-04-01

    Full Text Available The dependence of TCP downstream throughput (TCPdownT) on signal-to-noise ratio (SNR) in an IEEE802.11b WLAN system was investigated in various environments and for a variety of QoS traffic. TCPdownT was measured for the various SNR values observed. An infrastructure-based IEEE802.11b WLAN system, with networked computers on which measurement software was installed, was set up consecutively in various environments (open corridor, small offices with block walls and plaster boards, and free space). Empirical models describing TCPdownT as a function of SNR for different signal ranges (all signals, strong signals only, grey signals only and weak signals only) were statistically generated and validated. As the SNR values changed from high (strong signals) through low (grey signals) to very low (weak signals), our results show a strong dependence of TCPdownT on the received SNR. Our models showed lower RMS errors when compared with other similar models: 0.6734791 Mbps, 0.472209 Mbps, 0.9111563 Mbps and 0.5764460 Mbps for the general (all SNR) model, strong signals model, grey signals model and weak signals model, respectively. Our models will provide researchers and WLAN system users with a tool for estimating TCP downstream throughput in a real network in various environments by monitoring the received SNR.
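The figure of merit used in this record to compare empirical models is the root-mean-square error between measured throughput and each model's prediction. A minimal sketch of that computation, using made-up SNR/throughput pairs and a simple least-squares polynomial in place of the paper's actual fitted models:

```python
import numpy as np

# Hypothetical SNR (dB) vs TCP downstream throughput (Mbps) measurements;
# the values are illustrative only, not the paper's data or coefficients.
snr = np.array([8.0, 12, 16, 20, 24, 28, 32, 36, 40])
tput = np.array([0.4, 1.1, 2.0, 3.1, 3.9, 4.6, 5.0, 5.2, 5.3])

# Fit a simple quadratic empirical model tput ~ a*snr^2 + b*snr + c
coeffs = np.polyfit(snr, tput, deg=2)
pred = np.polyval(coeffs, snr)

# RMS error between measured and model-predicted throughput
rmse = np.sqrt(np.mean((tput - pred) ** 2))
print(round(float(rmse), 4))
```

The same RMSE calculation, applied per signal range (strong, grey, weak), is what allows the competing models to be ranked on a common scale.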

  15. EVpedia: an integrated database of high-throughput data for systemic analyses of extracellular vesicles

    Directory of Open Access Journals (Sweden)

    Dae-Kyum Kim

    2013-03-01

    Full Text Available Secretion of extracellular vesicles is a general cellular activity that spans the range from simple unicellular organisms (e.g. archaea, Gram-positive and Gram-negative bacteria) to complex multicellular ones, suggesting that this extracellular vesicle-mediated communication is evolutionarily conserved. Extracellular vesicles are spherical bilayered proteolipids with a mean diameter of 20–1,000 nm, which are known to contain various bioactive molecules including proteins, lipids, and nucleic acids. Here, we present EVpedia, an integrated database of high-throughput datasets from prokaryotic and eukaryotic extracellular vesicles. EVpedia provides high-throughput datasets of vesicular components (proteins, mRNAs, miRNAs, and lipids) present on prokaryotic, non-mammalian eukaryotic, and mammalian extracellular vesicles. In addition, EVpedia also provides an array of tools, such as the search and browse of vesicular components, Gene Ontology enrichment analysis, network analysis of vesicular proteins and mRNAs, and a comparison of vesicular datasets by ortholog identification. Moreover, publications on extracellular vesicle studies are listed in the database. This free web-based database of EVpedia (http://evpedia.info might serve as a fundamental repository to stimulate the advancement of extracellular vesicle studies and to elucidate the novel functions of these complex extracellular organelles.

  16. Green throughput taxation

    International Nuclear Information System (INIS)

    Bruvoll, A.; Ibenholt, K.

    1998-01-01

    According to optimal taxation theory, raw materials should be taxed to capture the embedded scarcity rent in their value. To reduce both natural resource use and the corresponding emissions, or the throughput in the economic system, the best policy may be a tax on material inputs. As a first approach to throughput taxation, this paper considers a tax on intermediates in the framework of a dynamic computable general equilibrium model with environmental feedbacks. To balance the budget, payroll taxes are reduced. As a result, welfare indicators such as material consumption and leisure time consumption are reduced, while on the other hand all the environmental indicators improve. 27 refs

  17. Serial isoelectric focusing as an effective and economic way to obtain maximal resolution and high-throughput in 2D-based comparative proteomics of scarce samples: proof-of-principle.

    Science.gov (United States)

    Farhoud, Murtada H; Wessels, Hans J C T; Wevers, Ron A; van Engelen, Baziel G; van den Heuvel, Lambert P; Smeitink, Jan A

    2005-01-01

    In 2D-based comparative proteomics of scarce samples, such as limited patient material, established methods for prefractionation and subsequent use of different narrow range IPG strips to increase overall resolution are difficult to apply. Also, a high number of samples, a prerequisite for drawing meaningful conclusions when pathological and control samples are considered, will increase the associated amount of work almost exponentially. Here, we introduce a novel, effective, and economic method designed to obtain maximum 2D resolution while maintaining the high throughput necessary to perform large-scale comparative proteomics studies. The method is based on connecting different IPG strips serially head-to-tail so that a complete line of different IPG strips with sequential pH regions can be focused in the same experiment. We show that when 3 IPG strips (covering together the pH range of 3-11) are connected head-to-tail an optimal resolution is achieved along the whole pH range. Sample consumption, time required, and associated costs are reduced by almost 70%, and the workload is reduced significantly.

  18. High throughput automated microbial bioreactor system used for clone selection and rapid scale-down process optimization.

    Science.gov (United States)

    Velez-Suberbie, M Lourdes; Betts, John P J; Walker, Kelly L; Robinson, Colin; Zoro, Barney; Keshavarz-Moore, Eli

    2018-01-01

    High throughput automated fermentation systems have become a useful tool in early bioprocess development. In this study, we investigated a 24 x 15 mL single use microbioreactor system, ambr 15f, designed for microbial culture. We compared the fed-batch growth and production capabilities of this system for two Escherichia coli strains, BL21 (DE3) and MC4100, and two industrially relevant molecules, hGH and scFv. In addition, different carbon sources were tested using bolus, linear or exponential feeding strategies, showing the capacity of the ambr 15f system to handle automated feeding. We used power per unit volume (P/V) as a scale criterion to compare the ambr 15f with 1 L stirred bioreactors which were previously scaled-up to 20 L with a different biological system, thus showing a potential 1,300 fold scale comparability in terms of both growth and product yield. By exposing the cells grown in the ambr 15f system to a level of shear expected in an industrial centrifuge, we determined that the cells are as robust as those from a bench scale bioreactor. These results provide evidence that the ambr 15f system is an efficient high throughput microbial system that can be used for strain and molecule selection as well as rapid scale-up. © 2017 The Authors Biotechnology Progress published by Wiley Periodicals, Inc. on behalf of American Institute of Chemical Engineers Biotechnol. Prog., 34:58-68, 2018.

  19. High-Throughput Silencing Using the CRISPR-Cas9 System: A Review of the Benefits and Challenges.

    Science.gov (United States)

    Wade, Mark

    2015-09-01

    The clustered regularly interspaced short palindromic repeats (CRISPR)/Cas system has been seized upon with a fervor enjoyed previously by small interfering RNA (siRNA) and short hairpin RNA (shRNA) technologies and has enormous potential for high-throughput functional genomics studies. The decision to use this approach must balance adoption of existing platforms against awaiting the development of more "mature" next-generation systems. Here, experience from siRNA and shRNA screening plays an important role, as issues such as targeting efficiency, pooling strategies, and off-target effects with those technologies are already framing debates in the CRISPR field. CRISPR/Cas can be exploited not only to knock out genes but also to up- or down-regulate gene transcription, in some cases in a multiplex fashion. This provides a powerful tool for studying the interaction among multiple signaling cascades in the same genetic background. Furthermore, the documented success of CRISPR/Cas-mediated gene correction (or the corollary, introduction of disease-specific mutations) provides proof of concept for the rapid generation of isogenic cell lines for high-throughput screening. In this review, the advantages and limitations of CRISPR/Cas are discussed and current and future applications are highlighted. It is envisaged that complementarities between CRISPR, siRNA, and shRNA will ensure that all three technologies remain critical to the success of future functional genomics projects. © 2015 Society for Laboratory Automation and Screening.

  20. The maximization of the efficiency in the energy conversion in isolated photovoltaic systems; Tecnicas de maxima transferencia de potencia em sistemas fotovoltaicos isolados

    Energy Technology Data Exchange (ETDEWEB)

    Machado-Neto, L. V. B.; Cabral, C. V. T.; Diniz, A. S. A. C.; Cortizo, P. C.; Oliveira-Filho, D.

    2004-07-01

    The maximization of efficiency in energy conversion is essential to developing the technical and economic sustainability of photovoltaic solar energy systems. This paper studies a power maximization technique for photovoltaic generators: Maximum Power Point Tracking (MPPT). Among the different strategies currently being studied, this work consists of the development of an electronic converter prototype for MPPT, including the tracking algorithm implemented in a microcontroller. A simulation of the system was also carried out, a prototype was assembled, and the first results are presented here. (Author)
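The abstract does not say which tracking algorithm the microcontroller implements; a common choice for MPPT, shown here purely as an illustrative sketch with a toy PV curve (the ~5 A / ~21 V panel model and step size are assumptions), is perturb-and-observe:

```python
def pv_power(v):
    """Toy PV curve (assumed): ~5 A short-circuit current, ~21 V open-circuit."""
    i = max(0.0, 5.0 * (1.0 - (v / 21.0) ** 8))
    return v * i

def mppt_po(v=12.0, step=0.2, iters=200):
    """Perturb-and-observe: keep stepping in whichever direction raises power."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v, p_prev

v_mp, p_mp = mppt_po()
print(v_mp, p_mp)   # settles near the maximum power point (~16 V for this curve)
```

The operating voltage oscillates around the maximum power point with an amplitude set by the perturbation step, which is the usual tradeoff of this strategy: a smaller step tracks more precisely but converges more slowly.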

  1. Development of digital dashboard system for medical practice: maximizing efficiency of medical information retrieval and communication.

    Science.gov (United States)

    Lee, Kee Hyuck; Yoo, Sooyoung; Shin, HoGyun; Baek, Rong-Min; Chung, Chin Youb; Hwang, Hee

    2013-01-01

    It is reported that digital dashboard systems in hospitals provide a user interface (UI) that can centrally manage and retrieve various patient-related information on a single screen, support the decision-making of medical professionals in real time by integrating scattered medical information systems and core workflows, enhance the competence and decision-making ability of medical professionals, and reduce the probability of misdiagnosis. However, the hospital digital dashboard systems reported to date have limitations for the general treatment of inpatients, because they were restricted to the work processes of certain departments or developed to improve specific disease-related indicators. Seoul National University Bundang Hospital developed a new-concept EMR system to overcome these limitations. The system allows medical professionals to easily access all information on inpatients and to effectively retrieve important information from any part of the hospital by displaying inpatient information in the form of a digital dashboard. In this study, we introduce the structure, development methodology and usage of this new-concept system.

  2. Maximizing land productivity by diversified cropping systems with different nitrogen fertilizer types

    Directory of Open Access Journals (Sweden)

    Abd El-Hafeez Ahmed ZOHRY

    2017-12-01

    Full Text Available Six field experiments were conducted at Giza Agricultural Research Station, Egypt during the 2010, 2011 and 2012 growing seasons to study the effect of two types of N fertilizer (urea and urea-form as slow-release (UF)) on intercropping cowpea with sunflower and intercropping wheat with pea. A split-plot design with three replications was used. The results indicated a non-significant effect of cropping system on sunflower and a significant effect on cowpea yield. A significant effect of N fertilizer was found on sunflower and a non-significant effect on cowpea yield. Furthermore, the interaction of cropping system and N fertilizer had a non-significant effect on sunflower and a significant effect on cowpea yield. With respect to wheat and pea intercropping, both crops were significantly affected by the intercropping system. A significant effect of N fertilizer was found on wheat and a non-significant effect on pea yield. Both wheat and pea were significantly affected by the interaction of cropping system and N fertilizer. A yield advantage was achieved because the land equivalent ratio exceeded 1.00. Dominance analysis showed that the leguminous crop is the dominated component. Thus, the studied intercropping systems can be recommended to farmers due to their beneficial returns.
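The yield-advantage criterion used in this record, the land equivalent ratio (LER), sums each crop's intercrop-to-sole-crop yield ratio; values above 1.00 mean intercropping uses land more efficiently than growing the crops separately. A minimal sketch with hypothetical yields (not the trial data from the paper):

```python
def land_equivalent_ratio(intercrop_yields, sole_yields):
    """LER = sum over component crops of (intercrop yield / sole-crop yield)."""
    return sum(ic / sole for ic, sole in zip(intercrop_yields, sole_yields))

# Hypothetical wheat and pea yields (t/ha): each intercropped component
# yields 65% of its sole-crop value.
ler = land_equivalent_ratio([2.6, 1.3], [4.0, 2.0])
print(round(ler, 2))  # → 1.3
```

An LER of 1.3 would mean 30% more land would be needed under sole cropping to match the intercrop's combined output.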

  3. Macrocell Builder: IP-Block-Based Design Environment for High-Throughput VLSI Dedicated Digital Signal Processing Systems

    Directory of Open Access Journals (Sweden)

    Urard Pascal

    2006-01-01

    Full Text Available We propose an efficient IP-block-based design environment for high-throughput VLSI systems. The flow generates a SystemC register-transfer-level (RTL) architecture, starting from a Matlab functional model described as a netlist of functional IP. The refinement model automatically inserts control structures to manage the delays induced by the use of RTL IPs, as well as a control structure to coordinate the execution of parallel clocked IPs. The delays may be managed by registers or by counters included in the control structure. The flow has been used successfully in three real-world DSP systems. The experiments show that the approach produces efficient RTL architectures and saves a substantial amount of design time.

  4. Maximizing the number of satisfied subscribers in Pub/Sub systems under capacity constraints

    NARCIS (Netherlands)

    Setty, V.J.; Kreitz, G; Urdaneta, G; Vitenberg, R; van Steen, M.R.

    2014-01-01

    Publish/subscribe (pub/sub) is a popular communication paradigm in the design of large-scale distributed systems. A provider of a pub/sub service (whether centralized, peer-assisted, or based on a federated organization of cooperatively managed servers) commonly faces a fundamental challenge: given

  5. Wireless EEG System Achieving High Throughput and Reduced Energy Consumption Through Lossless and Near-Lossless Compression.

    Science.gov (United States)

    Alvarez, Guillermo Dufort Y; Favaro, Federico; Lecumberry, Federico; Martin, Alvaro; Oliver, Juan P; Oreggioni, Julian; Ramirez, Ignacio; Seroussi, Gadiel; Steinfeld, Leonardo

    2018-02-01

    This work presents a wireless multichannel electroencephalogram (EEG) recording system featuring lossless and near-lossless compression of the digitized EEG signal. Two novel, low-complexity, efficient compression algorithms were developed and tested in a low-power platform. The algorithms were tested on six public EEG databases, comparing favorably with the best compression rates reported to date in the literature. In its lossless mode, the platform is capable of encoding and transmitting 59-channel EEG signals, sampled at 500 Hz and 16 bits per sample, at a current consumption of 337 µA per channel; this comes with a guarantee that the decompressed signal is identical to the sampled one. The near-lossless mode allows for significant energy savings and/or higher throughputs in exchange for a small guaranteed maximum per-sample distortion in the recovered signal. Finally, we address the tradeoff between computation cost and transmission savings by evaluating three alternatives: sending raw data, or encoding with one of two compression algorithms that differ in complexity and compression performance. We observe that the higher the throughput (number of channels and sampling rate), the larger the benefits obtained from compression.
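The lossless/near-lossless tradeoff described in this record can be illustrated with a toy predictive coder: predict each sample from the previous reconstruction, quantize the residual, and bound the per-sample error by a parameter delta, with delta = 0 giving lossless operation. This sketch is not the paper's codec, only a minimal example of the guarantee it describes:

```python
def encode(samples, delta):
    """Return (codes, reconstruction); guarantees |sample - reconstruction| <= delta."""
    q = 2 * delta + 1
    prev = 0
    codes, recon = [], []
    for s in samples:
        resid = s - prev                            # prediction residual
        sign = 1 if resid >= 0 else -1
        code = sign * ((abs(resid) + delta) // q)   # symmetric mid-tread quantizer
        codes.append(code)
        prev += code * q                            # decoder-side reconstruction
        recon.append(prev)
    return codes, recon

samples = [0, 3, 7, 12, 10, 6, 1, -4]
codes0, recon0 = encode(samples, delta=0)   # lossless: reconstruction is exact
codes2, recon2 = encode(samples, delta=2)   # near-lossless: error bounded by 2
max_err = max(abs(s - r) for s, r in zip(samples, recon2))
print(recon0 == samples, max_err)  # → True 2
```

With delta > 0 the quantized residuals cluster around zero, so an entropy coder spends fewer bits per sample, which is where the energy/throughput savings come from.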

  6. Optimization of hybrid imaging systems based on maximization of kurtosis of the restored point spread function

    DEFF Research Database (Denmark)

    Demenikov, Mads

    2011-01-01

    I propose a novel, but yet simple, no-reference, objective image quality measure based on the kurtosis of the restored point spread function. Using this measure, I optimize several phase masks for extended-depth-of-field in hybrid imaging systems and obtain results that are identical to optimization results based on full-reference image measures of restored images. In comparison with full-reference measures, the kurtosis measure is fast to compute and requires no images, noise distributions, or alignment of restored images, but only the signal-to-noise-ratio. © 2011 Optical Society of America.
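As an illustration of why kurtosis can serve as a no-reference sharpness cue, the sketch below compares the kurtosis of the pixel values of a narrow (well-restored) and a broad (poorly restored) Gaussian PSF. The PSF shapes, the grid, and the use of pixel-value kurtosis are assumptions for illustration, not the paper's phase-mask optimization:

```python
import numpy as np

def kurtosis(values):
    """Pearson kurtosis of a sample of pixel values."""
    v = np.asarray(values, dtype=float).ravel()
    m = v.mean()
    s2 = v.var()
    return ((v - m) ** 4).mean() / s2 ** 2

x = np.linspace(-10.0, 10.0, 201)
psf_sharp = np.exp(-x**2 / (2 * 0.5**2))   # near-delta PSF: energy in few pixels
psf_blurry = np.exp(-x**2 / (2 * 3.0**2))  # broad PSF: energy spread widely

print(kurtosis(psf_sharp) > kurtosis(psf_blurry))  # → True
```

A tightly restored PSF concentrates its energy in a few large pixel values among many near-zero ones, which makes the value distribution strongly leptokurtic; spreading that energy lowers the kurtosis.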

  7. High-throughput ultra high performance liquid chromatography combined with mass spectrometry approach for the rapid analysis and characterization of multiple constituents of the fruit of Acanthopanax senticosus (Rupr. et Maxim.) Harms.

    Science.gov (United States)

    Han, Yue; Zhang, Aihua; Sun, Hui; Zhang, Yingzhi; Meng, Xiangcai; Yan, Guangli; Liu, Liang; Wang, Xijun

    2017-05-01

    Acanthopanax senticosus (Rupr. et Maxim.) Harms, a traditional Chinese medicine, has been widely used to improve the function of the skeleton, heart, spleen and kidney. The fruit is rich in nutrients, but its chemical constituents are still unclear. A rapid method based on ultra high performance liquid chromatography with time-of-flight mass spectrometry was developed for the analysis of Acanthopanax senticosus fruit compounds in vitro and in vivo. In this study, the Acanthopanax senticosus fruit significantly increased the weight of immune organs, promoted the proliferation of lymphatic T cells, regulated lymphatic B cell function, and decreased the activity of natural killer cells. A total of 104 compounds of Acanthopanax senticosus fruit, including lignans, flavones, triterpenoid saponins, phenolic acids, and other constituents, were identified. Among them, seven chemical compounds were reported for the first time in the Acanthopanax senticosus fruit. By comparing blank and dosed serum samples, 24 prototype constituents were characterized. These results should help clarify the complex constituents of Acanthopanax senticosus fruit in vitro and in vivo for further pharmacological activity studies. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Maximizing commonality between military and general aviation fly-by-light helicopter system designs

    Science.gov (United States)

    Enns, Russell; Mossman, David C.

    1995-05-01

    In the face of shrinking defense budgets, survival of the United States rotorcraft industry is becoming increasingly dependent on increased sales in a highly competitive civil helicopter market. As a result, only the most competitive rotorcraft manufacturers are likely to survive. A key ingredient in improving our competitive position is the ability to produce more versatile, high-performance, high-quality, and low-cost-of-ownership helicopters. Fiber optic technology offers a path toward achieving these objectives. Also, adopting common components and architectures for different helicopter models (while maintaining each model's uniqueness) will further decrease design and production costs. Funds saved (or generated) by exploiting this commonality can be applied to R&D used to further improve the product. In this paper, we define a fiber-optics-based avionics architecture which provides the pilot a fly-by-light / digital flight control system which can be implemented in both civilian and military helicopters. We then discuss the advantages of such an architecture.

  9. Is the aim of the English health care system to maximize QALYs?

    Science.gov (United States)

    Shah, Koonal; Praet, Cecile; Devlin, Nancy; Sussex, Jonathan; Appleby, John; Parkin, David

    2012-07-01

    To compare the types of benefit considered relevant by the English Department of Health with those included by the National Institute for Health and Clinical Excellence (NICE) when conducting economic evaluations of options for spending limited health care resources. We analysed all policy Impact Assessments (IAs) carried out by the Department of Health (DH) in 2008 and 2009. The stated benefits of each policy were extracted and thematic analysis was used to categorise them. 51 Impact Assessments were analysed, eight of which mentioned quality-adjusted life year (QALY) gains as a benefit. 18 benefits other than QALY gains were identified. Apart from improving health outcomes, commonly cited benefits included: reducing costs, improving quality of care, and enhancing patient experience. Many of the policies reviewed were implemented on the basis of benefits unrelated to health outcomes. The methods being used to apply a monetary valuation to QALY gains (in cost-benefit calculations) are not consistent across Impact Assessments or with NICE's stated threshold range. The Department of Health and NICE approach resource allocation decisions in different ways, based upon overlapping but not congruent considerations and underlying principles. Given that all these decisions affect the allocation of the same fixed health care budget, there is a case for establishing a uniform framework for option appraisal and priority setting so as to avoid allocative inefficiency. The same applies to any other national health care system.

  10. Information-guided transmission in decode-and-forward relaying systems: Spatial exploitation and throughput enhancement

    KAUST Repository

    Yang, Yuli

    2011-07-01

    In addressing the issue of achieving high throughput in half-duplex relay channels, we exploit a concept of information-guided transmission for the network consisting of a source node, a destination node, and multiple half-duplex relay nodes. To benefit further from multiple relay nodes, the relay-selection patterns are defined as arbitrary combinations of the given relay nodes. By exploiting the differences among the spatial channel states, in each relay-help transmission additional information to be forwarded is mapped onto the index of the active relay-selection pattern, besides the basic information mapped onto the traditional constellation, which is forwarded by the relay node(s) in the active relay-selection pattern, so as to enhance the relay throughput. With iterative decoding, the destination node can achieve robust detection by decoupling the signals forwarded in different ways. We investigate the proposed scheme considering the "decode-and-forward" protocol and establish its achievable transmission rate. The analytical results on capacity behaviors prove the efficiency of the proposed scheme by showing that it achieves better capacity performance than the conventional scheme. © 2011 IEEE.
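A back-of-the-envelope sketch of the throughput gain described above: if every non-empty combination of N relay nodes is a valid relay-selection pattern, the index of the active pattern can carry up to floor(log2(2^N - 1)) extra bits per relay-help transmission. This is an illustration only (the function name and the error-free-index assumption are mine, not the paper's); the paper's achievable rate additionally depends on the channel states and the iterative decoding at the destination.

```python
import math

def extra_pattern_bits(n_relays: int) -> int:
    """Bits carried by the index of the active relay-selection pattern.

    Illustrative assumption: every non-empty subset of the relays is a
    valid pattern and the destination recovers the index error-free.
    """
    n_patterns = 2 ** n_relays - 1          # arbitrary non-empty combinations
    return math.floor(math.log2(n_patterns))

# With 4 relays there are 15 patterns, so the pattern index can carry
# up to 3 bits on top of the constellation-mapped information.
print(extra_pattern_bits(4))                # -> 3
```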

  11. Development of High Throughput Salt Separation System with Integrated Liquid Salt Separation - Salt Distillation Assembly

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Sangwoon; Park, K. M.; Kim, J. G.; Jeong, J. H.; Lee, S. J.; Park, S. B.; Kim, S. S.

    2013-01-15

    The capacity of a salt distiller should be sufficiently large to match the throughput of the uranium electro-refining process. In this study, an assembly comprising a liquid-separation sieve and a distillation crucible was developed for the sequential operation of liquid salt separation and vacuum distillation in the same tower. The feasibility of the sequential salt separation was examined by a rotation test of the sieve-crucible assembly and sequential operation of liquid salt separation and vacuum distillation. The salt adhering to the uranium deposits was removed successfully. The salt content in the deposits was below 0.1 wt% after the sequential liquid salt separation - salt distillation operation. From the results of this study, it could be concluded that efficient salt separation can be realized by the sequential operation of liquid salt separation and vacuum distillation in one distillation tower, since the operation procedures are simplified and no extra cooling and reheating operation is necessary.

  12. A high-throughput microfluidic dental plaque biofilm system to visualize and quantify the effect of antimicrobials

    Science.gov (United States)

    Nance, William C.; Dowd, Scot E.; Samarian, Derek; Chludzinski, Jeffrey; Delli, Joseph; Battista, John; Rickard, Alexander H.

    2013-01-01

    Objectives Few model systems are amenable to developing multi-species biofilms in parallel under environmentally germane conditions. This is a problem when evaluating the potential real-world effectiveness of antimicrobials in the laboratory. One such antimicrobial is cetylpyridinium chloride (CPC), which is used in numerous over-the-counter oral healthcare products. The aim of this work was to develop a high-throughput microfluidic system that is combined with a confocal laser scanning microscope (CLSM) to quantitatively evaluate the effectiveness of CPC against oral multi-species biofilms grown in human saliva. Methods Twenty-four-channel BioFlux microfluidic plates were inoculated with pooled human saliva and fed filter-sterilized saliva for 20 h at 37°C. The bacterial diversity of the biofilms was evaluated by bacterial tag-encoded FLX amplicon pyrosequencing (bTEFAP). The antimicrobial/anti-biofilm effect of CPC (0.5%–0.001% w/v) was examined using Live/Dead stain, CLSM and 3D imaging software. Results The analysis of biofilms by bTEFAP demonstrated that they contained genera typically found in human dental plaque. These included Aggregatibacter, Fusobacterium, Neisseria, Porphyromonas, Streptococcus and Veillonella. Using Live/Dead stain, clear gradations in killing were observed when the biofilms were treated with CPC between 0.5% and 0.001% w/v. At 0.5% (w/v) CPC, 90% of the total signal was from dead/damaged cells. Below this concentration range, less killing was observed. In the 0.5%–0.05% (w/v) range CPC penetration/killing was greatest and biofilm thickness was significantly reduced. Conclusions This work demonstrates the utility of a high-throughput microfluidic–CLSM system to grow multi-species oral biofilms, which are compositionally similar to naturally occurring biofilms, to assess the effectiveness of antimicrobials. PMID:23800904

  13. Maximizing the benefit of health workforce secondment in Botswana: an approach for strengthening health systems in resource-limited settings

    Directory of Open Access Journals (Sweden)

    Grignon JS

    2014-05-01

    Full Text Available Jessica S Grignon,1,2 Jenny H Ledikwe,1,2 Ditsapelo Makati,2 Robert Nyangah,2 Baraedi W Sento,2 Bazghina-werq Semo1,2 1Department of Global Health, University of Washington, Seattle, WA, USA; 2International Training and Education Center for Health, Gaborone, Botswana Abstract: To address health systems challenges in limited-resource settings, global health initiatives, particularly the President's Emergency Plan for AIDS Relief, have seconded health workers to the public sector. Implementation considerations for secondment as a health workforce development strategy are not well documented. The purpose of this article is to present outcomes, best practices, and lessons learned from a President's Emergency Plan for AIDS Relief-funded secondment program in Botswana. Outcomes are documented across four World Health Organization health systems' building blocks. Best practices include documentation of joint stakeholder expectations, collaborative recruitment, and early identification of counterparts. Lessons learned include inadequate ownership, a two-tier employment system, and ill-defined position duration. These findings can inform program and policy development to maximize the benefit of health workforce secondment. Secondment requires substantial investment, and emphasis should be placed on high-level technical positions responsible for building systems, developing health workers, and strengthening government to translate policy into programs. Keywords: human resources, health policy, health worker, HIV/AIDS, PEPFAR

  14. Development of high-throughput analysis system using highly-functional organic polymer monoliths

    International Nuclear Information System (INIS)

    Umemura, Tomonari; Kojima, Norihisa; Ueki, Yuji

    2008-01-01

    The growing demand for high-throughput analysis in the current competitive life sciences and industries has promoted the development of high-speed HPLC techniques and tools. As one of such tools, monolithic columns have attracted increasing attention and interest in the last decade due to the low flow-resistance and excellent mass transfer, allowing for rapid separations and reactions at high flow rates with minimal loss of column efficiency. Monolithic materials are classified into two main groups: silica- and organic polymer-based monoliths, each with their own advantages and disadvantages. Organic polymer monoliths have several distinct advantages in life-science research, including wide pH stability, less irreversible adsorption, and facile preparation and modification. Thus, we have so far tried to develop organic polymer monoliths for various chemical operations, such as separation, extraction, preconcentration, and reaction. In the present paper, recent progress in the development of organic polymer monoliths is discussed. Especially, the procedure for the preparation of methacrylate-based monoliths with various functional groups is described, where the influence of different compositional and processing parameters on the monolithic structure is also addressed. Furthermore, the performance of the produced monoliths is demonstrated through the results for (1) rapid separations of alkylbenzenes at high flow rates, (2) flow-through enzymatic digestion of cytochrome c on a trypsin-immobilized monolithic column, and (3) separation of the tryptic digest on a reversed-phase monolithic column. The flexibility and versatility of organic polymer monoliths will be beneficial for further enhancing analytical performance, and will open the way for new applications and opportunities both in scientific and industrial research. (author)

  15. Systems biology of bacterial nitrogen fixation: High-throughput technology and its integrative description with constraint-based modeling

    Directory of Open Access Journals (Sweden)

    Resendis-Antonio Osbaldo

    2011-07-01

    Full Text Available Abstract Background Bacterial nitrogen fixation is the biological process by which atmospheric nitrogen is taken up by bacteroids located in plant root nodules and converted into ammonium through the enzymatic activity of nitrogenase. In practice, this biological process serves as a natural form of fertilization and its optimization has significant implications in sustainable agricultural programs. Currently, the advent of high-throughput technology supplies valuable data that contribute to understanding the metabolic activity during bacterial nitrogen fixation. This undertaking is not trivial, and the development of computational methods useful in accomplishing an integrative, descriptive and predictive framework is a crucial issue in decoding the principles that regulate the metabolic activity of this biological process. Results In this work we present a systems biology description of the metabolic activity in bacterial nitrogen fixation. This was accomplished by an integrative analysis involving high-throughput data and constraint-based modeling to characterize the metabolic activity in Rhizobium etli bacteroids located at the root nodules of Phaseolus vulgaris (bean plant). Proteome and transcriptome technologies led us to identify 415 proteins and 689 up-regulated genes that orchestrate this biological process. Taking into account these data, we: (1) extended the metabolic reconstruction reported for R. etli; (2) simulated the metabolic activity during symbiotic nitrogen fixation; and (3) evaluated the in silico results in terms of bacterial phenotype. Notably, constraint-based modeling simulated nitrogen fixation activity in such a way that 76.83% of the enzymes and 69.48% of the genes were experimentally justified. Finally, to further assess the predictive scope of the computational model, gene deletion analysis was carried out on nine metabolic enzymes. Our model concluded that an altered metabolic activity on these enzymes induced
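The constraint-based modeling referred to above reduces, at its core, to a linear program: maximize a target flux subject to steady-state mass balance S·v = 0 and flux bounds. The following toy example (a hypothetical three-reaction network, not the R. etli reconstruction) sketches the idea with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (not the R. etli reconstruction): three reactions over
# two internal metabolites A and B, held at steady state S @ v = 0.
#   v1: -> A        (uptake, bounded by 10)
#   v2: A -> B      (conversion)
#   v3: B ->        (drain standing in for the flux of interest)
S = np.array([
    [1.0, -1.0,  0.0],   # mass balance of metabolite A
    [0.0,  1.0, -1.0],   # mass balance of metabolite B
])
bounds = [(0, 10), (0, None), (0, None)]
c = [0.0, 0.0, -1.0]     # linprog minimizes, so maximize v3 via -v3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(round(res.x[2], 6))   # optimal drain flux, limited by the uptake bound
```

In genome-scale practice the stoichiometric matrix has thousands of reactions and the objective is a biomass or nitrogen-fixation flux, but the optimization structure is the same.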

  16. High Throughput Plasma Water Treatment

    Science.gov (United States)

    Mujovic, Selman; Foster, John

    2016-10-01

    The troublesome emergence of new classes of micro-pollutants, such as pharmaceuticals and endocrine disruptors, poses challenges for conventional water treatment systems. In an effort to address these contaminants and to support water reuse in drought-stricken regions, new technologies must be introduced. The interaction of water with plasma rapidly mineralizes organics by inducing advanced oxidation in addition to other chemical, physical and radiative processes. The primary barrier to the implementation of plasma-based water treatment is process volume scale-up. In this work, we investigate a potentially scalable, high-throughput plasma water reactor that utilizes a packed-bed dielectric barrier-like geometry to maximize the plasma-water interface. Here, the water serves as the dielectric medium. High-speed imaging and emission spectroscopy are used to characterize the reactor discharges. Changes in methylene blue concentration and basic water parameters are mapped as a function of plasma treatment time. Experimental results are compared to electrostatic and plasma chemistry computations, which will provide insight into the reactor's operation so that efficiency can be assessed. Supported by NSF (CBET 1336375).

  17. High throughput detection of Coxiella burnetii by real-time PCR with internal control system and automated DNA preparation

    Directory of Open Access Journals (Sweden)

    Kramme Stefanie

    2008-05-01

    Full Text Available Abstract Background Coxiella burnetii is the causative agent of Q-fever, a widespread zoonosis. Due to its high environmental stability and infectivity it is regarded as a category B biological weapon agent. In domestic animals infection remains either asymptomatic or presents as infertility or abortion. Clinical presentation in humans can range from mild flu-like illness to acute pneumonia and hepatitis. Endocarditis represents the most common form of chronic Q-fever. In humans, serology is the gold standard for diagnosis but is inadequate for early case detection. In order to serve as a diagnostic tool in an eventual biological weapon attack or in local epidemics, we developed a real-time 5'-nuclease-based PCR assay with an internal control system. To facilitate high throughput, an automated extraction procedure was evaluated. Results To determine the minimum number of copies that are detectable at 95% chance, probit analysis was used. The limit of detection in blood was 2,881 copies/ml [95% CI, 2,188–4,745 copies/ml] with a manual extraction procedure and 4,235 copies/ml [95% CI, 3,143–7,428 copies/ml] with a fully automated extraction procedure, respectively. To demonstrate clinical application, a total of 72 specimens of animal origin were compared with respect to manual and automated extraction. A strong correlation between both methods was observed, rendering both methods suitable. Testing of 247 follow-up specimens of animal origin from a local Q-fever epidemic showed real-time PCR to be more sensitive than conventional PCR. Conclusion A sensitive and thoroughly evaluated real-time PCR was established. Its high-throughput mode may provide a useful approach to rapidly screen samples in local outbreaks for other organisms relevant to humans or animals. Compared to a conventional PCR assay, the sensitivity of real-time PCR was higher after testing samples from a local Q-fever outbreak.
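The 95% limits of detection above were obtained by probit analysis of replicate dilution series. As a rough sketch of that calculation (with made-up hit rates, not the study's data), one can probit-transform the detection fractions, fit them linearly against log10 concentration, and invert the fit at the 95% detection probability:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical dilution series: template copies/ml vs. the fraction of
# replicate PCR reactions that came up positive (not the study's data).
copies   = np.array([10000, 3000, 1000, 300, 100], dtype=float)
hit_rate = np.array([0.99, 0.95, 0.80, 0.50, 0.20])

# Probit regression: fit probit(hit rate) linearly against log10(copies),
# then invert the fit at the 95% detection probability.
x = np.log10(copies)
y = norm.ppf(hit_rate)                 # probit transform
slope, intercept = np.polyfit(x, y, 1)
lod95 = 10 ** ((norm.ppf(0.95) - intercept) / slope)
print(f"~{lod95:.0f} copies/ml detected with 95% probability")
```

Hit rates of exactly 0 or 1 must be excluded (or adjusted), since the probit transform diverges there; dedicated probit routines handle this with maximum-likelihood fitting rather than least squares.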

  18. Modus operandi for maximizing energy efficiency and increasing permeate flux of community scale solar powered reverse osmosis systems

    International Nuclear Information System (INIS)

    Vyas, Harsh; Suthar, Krunal; Chauhan, Mehul; Jani, Ruchita; Bapat, Pratap; Patel, Pankaj; Markam, Bhupendra; Maiti, Subarna

    2015-01-01

    Highlights: • Experimental data on energy-efficient photovoltaic-powered reverse osmosis system. • Synergetic management of electrical, thermal and hydraulic energies. • Use of reflectors, heat exchanger and turgo turbine. - Abstract: Photovoltaic-powered reverse osmosis systems can only be made cost effective if they are made highly energy efficient. In this work we describe a protocol to maximize energy efficiency and increase permeate flux in a fully integrated installation of such a system. The improved system consisted of (i) a photovoltaic array fitted with suitably positioned and aligned North–South V-trough reflectors to enhance power output from the array; (ii) a direct-contact heat exchanger fitted on the rear of the photovoltaic modules for active cooling of the same while safeguarding the terminals from short-circuit and corrosion; (iii) use of reverse osmosis feed water as the heat exchange medium while taking due care to limit the temperature rise of the feed water; (iv) enhancement of permeate flux through the rise in feed water temperature; (v) a turgo turbine for conversion of the hydraulic energy in the reverse osmosis reject water into mechanical energy to provide part of the energy needed to replace the booster pump utilized in the reverse osmosis unit. The V-trough reflectors on the photovoltaic modules with the thermal energy recovery system brought about an increase in power output of 40%, and the synergistic effect of (i)–(iv) gave rise to a total permeate volume boost of 59%. Integration of (v) resulted in 56% and 26% savings of electrical power when the reverse osmosis plant was operated by the battery bank and by the direct photovoltaic array, respectively.

  19. Maximal violation of a bipartite three-setting, two-outcome Bell inequality using infinite-dimensional quantum systems

    International Nuclear Information System (INIS)

    Pal, Karoly F.; Vertesi, Tamas

    2010-01-01

    The I3322 inequality is the simplest bipartite two-outcome Bell inequality beyond the Clauser-Horne-Shimony-Holt (CHSH) inequality, consisting of three two-outcome measurements per party. In the case of the CHSH inequality the maximal quantum violation can already be attained with local two-dimensional quantum systems; however, there is no such evidence for the I3322 inequality. In this paper a family of measurement operators and states is given which enables us to attain the maximum quantum value in an infinite-dimensional Hilbert space. Further, it is conjectured that our construction is optimal in the sense that measuring finite-dimensional quantum systems is not enough to achieve the true quantum maximum. We also describe an efficient iterative algorithm for computing the quantum maximum of an arbitrary two-outcome Bell inequality in any given Hilbert space dimension. This algorithm played a key role in obtaining our results for the I3322 inequality, and we also applied it to improve on our previous results concerning the maximum quantum violation of several bipartite two-outcome Bell inequalities with up to five settings per party.
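The iterative algorithm itself is not reproduced in this abstract, but the kind of quantity it computes can be illustrated on the CHSH case mentioned above, where the quantum maximum 2√2 is known to be attained with qubit measurements: the largest eigenvalue of the Bell operator at the optimal settings gives the quantum value. A minimal numerical check (standard CHSH settings, not the paper's I3322 construction):

```python
import numpy as np

# Pauli observables for each party's two-outcome measurements.
Z = np.array([[1, 0], [0, -1]], dtype=float)
X = np.array([[0, 1], [1, 0]], dtype=float)
s = 1 / np.sqrt(2)

# Optimal CHSH settings: A0 = Z, A1 = X; Bob's settings rotated by 45 degrees.
A0, A1 = Z, X
B0, B1 = s * (Z + X), s * (Z - X)

# Bell operator; its largest eigenvalue is the quantum maximum
# (the Tsirelson bound 2*sqrt(2) for CHSH).
bell = (np.kron(A0, B0) + np.kron(A0, B1)
        + np.kron(A1, B0) - np.kron(A1, B1))
print(np.linalg.eigvalsh(bell).max())   # -> 2.828... (= 2*sqrt(2))
```

The paper's algorithm additionally optimizes over the measurement operators themselves (a see-saw between the state and the settings); the fixed-settings eigenvalue computation above is only the inner step of such an iteration.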

  20. Multi-objective optimization for the maximization of the operating share of cogeneration system in District Heating Network

    International Nuclear Information System (INIS)

    Franco, Alessandro; Versace, Michele

    2017-01-01

    Highlights: • Combined Heat and Power plants and civil/residential energy uses. • CHP plant supported by auxiliary boilers and thermal energy storage. • Definition of optimal operational strategies for cogeneration plants for District Heating. • Optimally sized Thermal Energy Storage and a hybrid operational strategy. • Maximization of cogeneration share and reduction of time of operation of auxiliary boilers. - Abstract: The aim of the paper is to define optimal operational strategies for Combined Heat and Power plants connected to civil/residential District Heating Networks. The role of a reduced number of design variables, including a Thermal Energy Storage system and a hybrid operational strategy dependent on the storage level, is considered. The basic principle is to reach maximum efficiency of system operation through the utilization of an optimally sized Thermal Energy Storage. Objective functions of both purely energetic and combined energetic-economic type can be considered. In particular, First and Second Law Efficiency, thermal losses of the storage, and the number of starts and stops of the combined heat and power unit are considered. Constraints are imposed to nullify the waste of heat and to operate the unit at its maximum efficiency for the highest possible number of consecutive operating hours, until the thermal tank cannot store more energy. The methodology is applied to a detailed case study: a medium-size district heating system in an urban context in northern Italy, powered by a combined heat and power plant supported by conventional auxiliary boilers. The issues involving this type of thermal load are also widely investigated in the paper. An increase of the Second Law Efficiency of the system of 26% (from 0.35 to 0.44) can be evidenced, while the First Law Efficiency shifts from about 0.74 to 0.84. The optimization strategy permits combining the economic benefit of cogeneration with the idea of reducing energy waste and exergy losses.

  1. MACRO: a combined microchip-PCR and microarray system for high-throughput monitoring of genetically modified organisms.

    Science.gov (United States)

    Shao, Ning; Jiang, Shi-Meng; Zhang, Miao; Wang, Jing; Guo, Shu-Juan; Li, Yang; Jiang, He-Wei; Liu, Cheng-Xi; Zhang, Da-Bing; Yang, Li-Tao; Tao, Sheng-Ce

    2014-01-21

    The monitoring of genetically modified organisms (GMOs) is a primary step of GMO regulation. However, there is presently a lack of effective and high-throughput methodologies for specifically and sensitively monitoring most of the commercialized GMOs. Herein, we developed a multiplex amplification on a chip with readout on an oligo microarray (MACRO) system specifically for convenient GMO monitoring. This system is composed of a microchip for multiplex amplification and an oligo microarray for the readout of multiple amplicons, containing a total of 91 targets (18 universal elements, 20 exogenous genes, 45 events, and 8 endogenous reference genes) that covers 97.1% of all GM events commercialized up to 2012. We demonstrate that the specificity of MACRO is ~100%, with a limit of detection (LOD) that is suitable for real-world applications. Moreover, the results obtained with MACRO for simulated complex samples and blind samples were 100% consistent with expectations and with the results of independently performed real-time PCRs, respectively. Thus, we believe MACRO is the first system that can be applied for effectively monitoring the majority of the commercialized GMOs in a single test.

  2. Hydrogel Based 3-Dimensional (3D) System for Toxicity and High-Throughput (HTP) Analysis for Cultured Murine Ovarian Follicles

    Science.gov (United States)

    Zhou, Hong; Malik, Malika Amattullah; Arab, Aarthi; Hill, Matthew Thomas; Shikanov, Ariella

    2015-01-01

    Various toxicants, drugs and their metabolites carry potential ovarian toxicity. Ovarian follicles, the functional unit of the ovary, are susceptible to this type of damage at all stages of their development. However, despite the large scale of potential negative impacts, assays that study ovarian toxicity are limited. Exposure of cultured ovarian follicles to toxicants of interest has served as an important tool for evaluation of toxic effects for decades. Mouse follicles cultured on the bottom of a culture dish continue to serve as an important approach for mechanistic studies. In this paper, we demonstrated the usefulness of a hydrogel-based 3-dimensional (3D) mouse ovarian follicle culture as a tool to study ovarian toxicity in a different setup. The 3D in vitro culture, based on a fibrin alginate interpenetrating network (FA-IPN), preserves the architecture of the ovarian follicle and the physiological structure-function relationship. We applied the novel 3D high-throughput (HTP) in vitro ovarian follicle culture system to study the ovotoxic effects of an anti-cancer drug, Doxorubicin (DXR). The fibrin component in the system is degraded by plasmin and appears as a clear circle around the encapsulated follicle. The degradation area of the follicle is strongly correlated with follicle survival and growth. To analyze fibrin degradation in a high-throughput manner, we created a custom MATLAB® code that converts brightfield micrographs of follicles encapsulated in FA-IPN to binary images, followed by image analysis. We did not observe any significant difference between manually processed images and the automated MATLAB® method, thereby confirming that the automated program is suitable to measure fibrin degradation to evaluate follicle health. The cultured follicles were treated with DXR at concentrations ranging from 0.005 nM to 200 nM, corresponding to the therapeutic plasma levels of DXR in patients. Follicles treated with DXR demonstrated decreased survival rate in
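The image-analysis step described above (the authors' code is in MATLAB and is not reproduced here) amounts to binarizing each brightfield micrograph and measuring the bright, cleared area. A hypothetical NumPy re-sketch with a fixed global threshold:

```python
import numpy as np

def degradation_fraction(image: np.ndarray, threshold: float) -> float:
    """Fraction of pixels brighter than the threshold.

    Hypothetical stand-in for the paper's MATLAB pipeline: degraded fibrin
    appears as a clear (bright) circle, so bright pixels / total pixels
    approximates the degradation area.
    """
    binary = image > threshold   # binarize the micrograph
    return binary.mean()

# Synthetic 100x100 frame: dark fibrin background with a bright cleared disk.
yy, xx = np.mgrid[:100, :100]
frame = np.where((yy - 50) ** 2 + (xx - 50) ** 2 <= 20 ** 2, 0.9, 0.2)
print(round(degradation_fraction(frame, 0.5), 3))
```

A production pipeline would choose the threshold automatically (e.g. Otsu's method), smooth the image first, and mask out the follicle itself before measuring the cleared area.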

  3. Hydrogel Based 3-Dimensional (3D) System for Toxicity and High-Throughput (HTP) Analysis for Cultured Murine Ovarian Follicles.

    Directory of Open Access Journals (Sweden)

    Hong Zhou

    Full Text Available Various toxicants, drugs and their metabolites carry potential ovarian toxicity. Ovarian follicles, the functional unit of the ovary, are susceptible to this type of damage at all stages of their development. However, despite the large scale of potential negative impacts, assays that study ovarian toxicity are limited. Exposure of cultured ovarian follicles to toxicants of interest has served as an important tool for evaluation of toxic effects for decades. Mouse follicles cultured on the bottom of a culture dish continue to serve as an important approach for mechanistic studies. In this paper, we demonstrated the usefulness of a hydrogel-based 3-dimensional (3D) mouse ovarian follicle culture as a tool to study ovarian toxicity in a different setup. The 3D in vitro culture, based on a fibrin alginate interpenetrating network (FA-IPN), preserves the architecture of the ovarian follicle and the physiological structure-function relationship. We applied the novel 3D high-throughput (HTP) in vitro ovarian follicle culture system to study the ovotoxic effects of an anti-cancer drug, Doxorubicin (DXR). The fibrin component in the system is degraded by plasmin and appears as a clear circle around the encapsulated follicle. The degradation area of the follicle is strongly correlated with follicle survival and growth. To analyze fibrin degradation in a high-throughput manner, we created a custom MATLAB® code that converts brightfield micrographs of follicles encapsulated in FA-IPN to binary images, followed by image analysis. We did not observe any significant difference between manually processed images and the automated MATLAB® method, thereby confirming that the automated program is suitable to measure fibrin degradation to evaluate follicle health. The cultured follicles were treated with DXR at concentrations ranging from 0.005 nM to 200 nM, corresponding to the therapeutic plasma levels of DXR in patients. Follicles treated with DXR demonstrated decreased

  4. High-throughput siRNA screening applied to the ubiquitin-proteasome system

    DEFF Research Database (Denmark)

    Poulsen, Esben Guldahl; Nielsen, Sofie V.; Pietras, Elin J.

    2016-01-01

    The ubiquitin-proteasome system is the major pathway for intracellular protein degradation in eukaryotic cells. Due to the large number of genes dedicated to the ubiquitin-proteasome system, mapping degradation pathways for short-lived proteins is a daunting task, in particular in mammalian cells

  5. Analysis of high-throughput plant image data with the information system IAP

    Directory of Open Access Journals (Sweden)

    Klukas Christian

    2012-06-01

    Full Text Available This work presents a sophisticated information system, the Integrated Analysis Platform (IAP), an approach supporting large-scale image analysis for different species and imaging systems. In its current form, IAP supports the investigation of Maize, Barley and Arabidopsis plants based on images obtained in different spectra.

  6. Systems biology definition of the core proteome of metabolism and expression is consistent with high-throughput data.

    Science.gov (United States)

    Yang, Laurence; Tan, Justin; O'Brien, Edward J; Monk, Jonathan M; Kim, Donghyuk; Li, Howard J; Charusanti, Pep; Ebrahim, Ali; Lloyd, Colton J; Yurkovich, James T; Du, Bin; Dräger, Andreas; Thomas, Alex; Sun, Yuekai; Saunders, Michael A; Palsson, Bernhard O

    2015-08-25

    Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems-level, and provides a basis for computing essential cell functions is lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma genitalium). Based on transcriptomics data across environmental and genetic backgrounds, the systems biology core proteome is significantly enriched in nondifferentially expressed genes and depleted in differentially expressed genes. Compared with the noncore, core gene expression levels are also similar across genetic backgrounds (two times higher Spearman rank correlation) and exhibit significantly more complex transcriptional and posttranscriptional regulatory features (40% more transcription start sites per gene, 22% longer 5'UTR). Thus, genome-scale systems biology approaches rigorously identify a functional core proteome needed to support growth. This framework, validated by using high-throughput datasets, facilitates a mechanistic understanding of systems-level core proteome function through in silico models; it de facto defines a paleome.

  7. A high-throughput qPCR system for simultaneous quantitative detection of dairy Lactococcus lactis and Leuconostoc bacteriophages

    DEFF Research Database (Denmark)

    Muhammed, Musemma Kedir; Krych, Lukasz; Nielsen, Dennis Sandris

    2017-01-01

    Simultaneous quantitative detection of Lactococcus (Lc.) lactis and Leuconostoc species bacteriophages (phages) has not been reported in dairies using undefined mixed-strain DL-starters, probably due to the lack of applicable methods. We optimized a high-throughput qPCR system that allows simultaneous quantitative detection of the Lc. lactis 936 (now SK1virus), P335, c2 (now C2virus) and Leuconostoc phage groups. Component assays are designed to have high efficiencies and nearly the same dynamic detection ranges, i.e., from 1.1 × 10^5 to 1.1 × 10^1 phage genomes per reaction, which corresponds to 9 × 10^7 to 9 × 10^3 phage particles mL^-1 without any additional up-concentrating steps. The amplification efficiencies of the corresponding assays were 100.1 ± 2.6%, 98.7 ± 2.3%, 101.0 ± 2.3% and 96.2 ± 6.2%. The qPCR system was tested on samples obtained from a dairy plant that employed traditional mother...
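The dynamic range above rests on standard log-linear qPCR arithmetic: the quantification cycle (Cq) maps to copy number through the standard-curve slope and intercept, and the slope also implies the amplification efficiency. A minimal sketch of that arithmetic, with an illustrative slope and intercept that are not taken from the paper:

```python
def copies_from_cq(cq, slope, intercept):
    """Genome copies per reaction from a log-linear standard curve:
    Cq = slope * log10(copies) + intercept."""
    return 10 ** ((cq - intercept) / slope)

def efficiency_from_slope(slope):
    """Amplification efficiency (%) implied by the slope; a perfect
    doubling per cycle corresponds to slope ≈ -3.32 and 100%."""
    return (10 ** (-1 / slope) - 1) * 100

# Illustrative values (NOT from the paper): slope -3.32, intercept 38.0
copies = copies_from_cq(21.4, -3.32, 38.0)   # ≈ 1e5 genomes per reaction
eff = efficiency_from_slope(-3.32)           # ≈ 100%
```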

  8. High-Throughput Automatic Training System for Odor-Based Learned Behaviors in Head-Fixed Mice

    Directory of Open Access Journals (Sweden)

    Zhe Han

    2018-02-01

    Understanding neuronal mechanisms of learned behaviors requires efficient behavioral assays. We designed a high-throughput automatic training system (HATS) for olfactory behaviors in head-fixed mice. The hardware and software were constructed to enable automatic training with minimal human intervention. The integrated system was composed of customized 3D-printed supporting components, a fast-response odor-delivery unit, and an Arduino-based hardware-control and data-acquisition unit. Furthermore, the customized software was designed to enable automatic training in all training phases, including lick-teaching, shaping and learning. Using HATS, we trained mice to perform delayed non-match to sample (DNMS), delayed paired association (DPA), Go/No-go (GNG), and GNG reversal tasks. These tasks probed cognitive functions including sensory discrimination, working memory, decision making and cognitive flexibility. Mice reached stable levels of performance within several days in the tasks. HATS enabled an experimenter to train eight mice simultaneously, thereby greatly enhancing experimental efficiency. Combined with causal perturbation and activity recording techniques, HATS can greatly facilitate our understanding of the neural-circuitry mechanisms underlying learned behaviors.
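As a concrete illustration of the kind of trial logic such a system automates, here is a hypothetical sketch of outcome classification in a Go/No-go (GNG) task; the odor names and reward rule are illustrative assumptions, not details from the paper:

```python
def run_gng_trial(odor, licked, go_odor="odor_A"):
    """Classify one head-fixed GNG trial from the presented odor and the
    animal's lick response during the answer window."""
    is_go = (odor == go_odor)
    if is_go and licked:
        return "hit"                # water reward delivered
    if is_go and not licked:
        return "miss"
    if not is_go and licked:
        return "false_alarm"        # brief timeout as mild punishment
    return "correct_rejection"

trials = [("odor_A", True), ("odor_B", True), ("odor_A", False), ("odor_B", False)]
outcomes = [run_gng_trial(odor, licked) for odor, licked in trials]
```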

  9. High-Throughput Study of Diffusion and Phase Transformation Kinetics of Magnesium-Based Systems for Automotive Cast Magnesium Alloys

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Alan A [The Ohio State Univ., Columbus, OH (United States); Zhao, Ji-Cheng [The Ohio State Univ., Columbus, OH (United States); Riggi, Adrienne [National Energy Technology Lab. (NETL), Morgantown, WV (United States); Joost, William [US Dept. of Energy, Washington, DC (United States)

    2017-10-02

    The objective of the proposed study is to establish a scientific foundation on kinetic modeling of diffusion, phase precipitation, and casting/solidification, in order to accelerate the design and optimization of cast magnesium (Mg) alloys for weight reduction of the U.S. automotive fleet. The team has performed the following tasks: 1) study diffusion kinetics of various Mg-containing binary systems using high-throughput diffusion multiples to establish reliable diffusivity and mobility databases for the Mg-aluminum (Al)-zinc (Zn)-tin (Sn)-calcium (Ca)-strontium (Sr)-manganese (Mn) systems; 2) study the precipitation kinetics (nucleation, growth and coarsening) using both innovative dual-anneal diffusion multiples and cast model alloys to provide large amounts of kinetic data (including interfacial energy) and microstructure atlases to enable implementation of the Kampmann-Wagner numerical model to simulate phase transformation kinetics of non-spherical/non-cuboidal precipitates in Mg alloys; 3) implement a micromodel that takes into account back diffusion in the solid phase in order to predict microstructure and microsegregation in multicomponent Mg alloys during dendritic solidification, especially under high pressure die-casting (HPDC) conditions; and 4) widely disseminate the data, knowledge and information using the Materials Genome Initiative infrastructure (http://www.mgidata.org) as well as publications and digital data sharing to enable researchers to identify new pathways/routes to better cast Mg alloys.

  10. Inventory management in the environment and the theory of constraints management accounting system throughput accounting

    OpenAIRE

    Elsukova Tatiana Vasilevna

    2014-01-01

    This article analyzes techniques and methods for a company's inventory management using information, both financial and non-financial, from a management accounting system based on the principles of the theory of constraints.

  11. Inventory management in the environment and the theory of constraints management accounting system throughput accounting

    Directory of Open Access Journals (Sweden)

    Elsukova Tatiana Vasilevna

    2014-02-01

    This article analyzes techniques and methods for a company's inventory management using information, both financial and non-financial, from a management accounting system based on the principles of the theory of constraints.
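For context, throughput accounting evaluates such decisions through three measures: throughput (T), investment (I), and operating expense (OE). A minimal sketch of those definitions, with purely illustrative figures:

```python
def throughput(revenue, totally_variable_costs):
    """Throughput T: the rate at which the system generates money through
    sales, net of totally variable costs (e.g. raw materials)."""
    return revenue - totally_variable_costs

def net_profit(t, operating_expense):
    """Net profit NP = T - OE."""
    return t - operating_expense

def return_on_investment(np_, investment):
    """ROI = NP / I, where I includes money tied up in inventory."""
    return np_ / investment

T = throughput(revenue=500_000, totally_variable_costs=200_000)   # 300_000
NP = net_profit(T, operating_expense=180_000)                     # 120_000
ROI = return_on_investment(NP, investment=600_000)                # 0.2
```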

  12. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution of profits among consumers fully into account, and partial equilibrium analysis suffices.

  13. High throughput photo-oxidations in a packed bed reactor system.

    Science.gov (United States)

    Kong, Caleb J; Fisher, Daniel; Desai, Bimbisar K; Yang, Yuan; Ahmad, Saeed; Belecki, Katherine; Gupton, B Frank

    2017-12-01

    The efficiency gains produced by continuous-flow systems in conducting photochemical transformations have been extensively demonstrated. Recently, these systems have been used in developing safe and efficient methods for photo-oxidations using singlet oxygen generated by photosensitizers. Much of the previous work has focused on the use of homogeneous photocatalysts. The development of a unique, packed-bed photoreactor system using immobilized rose bengal expands these capabilities, as this robust photocatalyst allows access to and elaboration from these highly useful building blocks without the need for further purification. With this platform we were able to demonstrate a wide scope of singlet oxygen ene reactions, [4+2] cycloadditions, and heteroatom oxidations. Furthermore, we applied this method as a strategic element in the synthesis of the high-volume antimalarial artemisinin. Copyright © 2017. Published by Elsevier Ltd.

  14. Automated high-throughput flow-through real-time diagnostic system

    Science.gov (United States)

    Regan, John Frederick

    2012-10-30

    An automated real-time flow-through system capable of processing multiple samples in an asynchronous, simultaneous, and parallel fashion for nucleic acid extraction and purification, followed by assay assembly, genetic amplification, multiplex detection, analysis, and decontamination. The system is able to hold and access an unlimited number of fluorescent reagents that may be used to screen samples for the presence of specific sequences. The apparatus works by associating extracted and purified sample with a series of reagent plugs that have been formed in a flow channel and delivered to a flow-through real-time amplification detector that has a multiplicity of optical windows, to which the sample-reagent plugs are placed in an operative position. The diagnostic apparatus includes sample multi-position valves, a master sample multi-position valve, a master reagent multi-position valve, reagent multi-position valves, and an optical amplification/detection system.

  15. High-throughput automated system for statistical biosensing employing microcantilevers arrays

    DEFF Research Database (Denmark)

    Bosco, Filippo; Chen, Ching H.; Hwu, En T.

    2011-01-01

    In this paper we present a completely new and fully automated system for parallel microcantilever-based biosensing. Our platform is able to monitor simultaneously the change of resonance frequency (dynamic mode), of deflection (static mode), and of surface roughness of hundreds of cantilevers in a very short time over multiple biochemical reactions. We have proven that our system is capable of measuring 900 independent microsensors in less than a second. Here, we report statistical biosensing results performed over a hapten-antibody assay, where complete characterization of the biochemical...

  16. A novel high-throughput drip-flow system to grow autotrophic biofilms of contrasting diversities

    DEFF Research Database (Denmark)

    Kinnunen, Marta; Dechesne, Arnaud; Albrechtsen, Hans-Jørgen

    The impact of community diversity on the functioning and assembly of microbial systems remains a central question in microbial ecology. This question is often addressed by either combining a few cultures without necessarily a history of coexistence, or by using environmental communities, which a...

  17. On the throughput of cognitive radio MIMO systems assisted with UAV relays

    KAUST Repository

    Sboui, Lokman; Ghazzai, Hakim; Rezki, Zouheir; Alouini, Mohamed-Slim

    2017-01-01

    We analyze the achievable rates of a cognitive radio MIMO system assisted by an unmanned aerial vehicle (UAV) relay. The primary user (PU) and the secondary user (SU) aim to communicate to the closest primary base station (BS) via a multi

  18. High-throughput live-imaging of embryos in microwell arrays using a modular specimen mounting system.

    Science.gov (United States)

    Donoughe, Seth; Kim, Chiyoung; Extavour, Cassandra G

    2018-04-30

    High-throughput live-imaging of embryos is an essential technique in developmental biology, but it is difficult and costly to mount and image embryos in consistent conditions. Here, we present OMMAwell, a simple, reusable device to easily mount dozens of embryos in arrays of agarose microwells with customizable dimensions and spacing. OMMAwell can be configured to mount specimens for upright or inverted microscopes, and includes a reservoir to hold live-imaging medium to maintain constant moisture and osmolarity of specimens during time-lapse imaging. All device components can be fabricated by cutting pieces from a sheet of acrylic using a laser cutter or by making them with a 3D printer. We demonstrate how to design a custom mold and use it to live-image dozens of embryos at a time. We include descriptions, schematics, and design files for 13 additional molds for nine animal species, including most major traditional laboratory models and a number of emerging model systems. Finally, we provide instructions for researchers to customize OMMAwell inserts for embryos or tissues not described herein. © 2018. Published by The Company of Biologists Ltd.

  19. ImmuneDB: a system for the analysis and exploration of high-throughput adaptive immune receptor sequencing data.

    Science.gov (United States)

    Rosenfeld, Aaron M; Meng, Wenzhao; Luning Prak, Eline T; Hershberg, Uri

    2017-01-15

    As high-throughput sequencing of B cells becomes more common, the need for tools to analyze the large quantity of data also increases. This article introduces ImmuneDB, a system for analyzing vast amounts of heavy chain variable region sequences and exploring the resulting data. It can take raw FASTA/FASTQ data as input, identify genes, determine clones, construct lineages, and provide information such as selection pressure and mutation analysis. It uses an industry-leading database, MySQL, to provide fast analysis and avoid the complexities of using error-prone flat files. ImmuneDB is freely available at http://immunedb.com. A demo of the ImmuneDB web interface is available at http://immunedb.com/demo. Contact: Uh25@drexel.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
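The clone-determination step described above is commonly implemented by bucketing sequences on V gene, J gene and CDR3 length, then grouping similar CDR3s. A hypothetical sketch of that idea (the records, field names and 80% threshold are illustrative, not ImmuneDB's actual parameters):

```python
from collections import defaultdict

def cdr3_similarity(a, b):
    """Fraction of matching positions between two equal-length CDR3s."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def group_clones(seqs, threshold=0.8):
    """Greedy grouping: the same (V, J, CDR3 length) and a CDR3 similarity
    above the threshold put two sequences in the same clone."""
    buckets = defaultdict(list)
    for s in seqs:
        buckets[(s["v"], s["j"], len(s["cdr3"]))].append(s)
    clones = []
    for members in buckets.values():
        bucket_clones = []
        for s in members:
            for clone in bucket_clones:
                if cdr3_similarity(clone[0]["cdr3"], s["cdr3"]) >= threshold:
                    clone.append(s)
                    break
            else:
                bucket_clones.append([s])
        clones.extend(bucket_clones)
    return clones

seqs = [
    {"v": "IGHV1", "j": "IGHJ4", "cdr3": "CARDYW"},
    {"v": "IGHV1", "j": "IGHJ4", "cdr3": "CARDFW"},  # 5/6 positions match the first
    {"v": "IGHV3", "j": "IGHJ4", "cdr3": "CARDYW"},  # different V gene: separate clone
]
clones = group_clones(seqs)
```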

  20. Novel high-throughput screening system for identifying STAT3-SH2 antagonists

    International Nuclear Information System (INIS)

    Uehara, Yutaka; Mochizuki, Masato; Matsuno, Kenji; Haino, Takeharu; Asai, Akira

    2009-01-01

    Constitutive activation of the oncogenic transcription factor STAT3 frequently occurs in various human malignancies. STAT3 activation involves dimerization via intermolecular pTyr-SH2 interaction. Thus, antagonizing this interaction is a feasible approach to inhibit STAT3 activation for cancer therapy. In order to identify selective STAT3 inhibitors, we developed a biochemical HTS system based on AlphaScreen technology, which measures the abilities of test compounds to antagonize pTyr-SH2 interactions. We screened our chemical libraries using this system and identified 5,15-diphenylporphyrin (5,15-DPP) as a selective STAT3-SH2 antagonist. Selective inhibition of STAT3 nuclear translocation and DNA binding activity was observed in cells treated with 5,15-DPP. IL-6-dependent dimerization of STAT3, c-myc promoter binding and c-myc protein expression were all suppressed by 5,15-DPP, whereas no decrement in either expression or phosphorylation level of STAT3 was observed. Thus, the HTS assay system presented herein may be useful for identifying novel STAT3-SH2 antagonists.

  1. Structural, dielectric and ferroelectric properties of (Bi,Na)TiO3–BaTiO3 system studied by high throughput screening

    Energy Technology Data Exchange (ETDEWEB)

    Hayden, Brian E. [Ilika Technologies Plc., Kenneth Dibben House, Enterprise Road, University of Southampton Science Park, Chilworth, Southampton SO16 7NS (United Kingdom); Department of Chemistry, University of Southampton, Highfield, Southampton SO17 1BJ (United Kingdom); Yakovlev, Sergey, E-mail: sergey.yakovlev@ilika.com [Ilika Technologies Plc., Kenneth Dibben House, Enterprise Road, University of Southampton Science Park, Chilworth, Southampton SO16 7NS (United Kingdom)

    2016-03-31

    Thin-film materials libraries of the Bi2O3–Na2O–TiO2–BaO system in a broad composition range have been deposited in ultra-high vacuum from elemental evaporation sources and an oxygen plasma source. A high throughput approach was used for systematic compositional and structural characterization and the screening of the dielectric and ferroelectric properties. The perovskite (Bi,Na)TiO3–BaTiO3 phase with a Ba concentration near the morphotropic phase boundary (ca. 6 at.%) exhibited a relative dielectric permittivity of 180, a loss tangent of 0.04 and remnant polarization of 19 μC/cm^2. Compared to published data, the observed remnant polarization is close to that known for epitaxially grown films but higher than the values reported for polycrystalline films. The high throughput methodology and systematic nature of the study allowed us to establish the composition boundaries of the phase with optimal dielectric and ferroelectric characteristics. - Highlights: • Bi2O3–Na2O–TiO2–BaO high throughput materials library was deposited using PVD method. • Materials were processed from individual molecular beam epitaxy sources of elements. • High throughput approach was used for structural, dielectric and ferroelectric study. • Composition boundaries of perovskite compounds with optimum properties are reported.

  2. Structural, dielectric and ferroelectric properties of (Bi,Na)TiO3–BaTiO3 system studied by high throughput screening

    International Nuclear Information System (INIS)

    Hayden, Brian E.; Yakovlev, Sergey

    2016-01-01

    Thin-film materials libraries of the Bi2O3–Na2O–TiO2–BaO system in a broad composition range have been deposited in ultra-high vacuum from elemental evaporation sources and an oxygen plasma source. A high throughput approach was used for systematic compositional and structural characterization and the screening of the dielectric and ferroelectric properties. The perovskite (Bi,Na)TiO3–BaTiO3 phase with a Ba concentration near the morphotropic phase boundary (ca. 6 at.%) exhibited a relative dielectric permittivity of 180, a loss tangent of 0.04 and remnant polarization of 19 μC/cm^2. Compared to published data, the observed remnant polarization is close to that known for epitaxially grown films but higher than the values reported for polycrystalline films. The high throughput methodology and systematic nature of the study allowed us to establish the composition boundaries of the phase with optimal dielectric and ferroelectric characteristics. - Highlights: • Bi2O3–Na2O–TiO2–BaO high throughput materials library was deposited using PVD method. • Materials were processed from individual molecular beam epitaxy sources of elements. • High throughput approach was used for structural, dielectric and ferroelectric study. • Composition boundaries of perovskite compounds with optimum properties are reported.

  3. Multichannel microscale system for high throughput preparative separation with comprehensive collection and analysis

    Energy Technology Data Exchange (ETDEWEB)

    Karger, Barry L.; Kotler, Lev; Foret, Frantisek; Minarik, Marek; Kleparnik, Karel

    2003-12-09

    A modular multiple lane or capillary electrophoresis (chromatography) system that permits automated parallel separation and comprehensive collection of all fractions from samples in all lanes or columns, with the option of further on-line automated sample fraction analysis, is disclosed. Preferably, fractions are collected in a multi-well fraction collection unit, or plate (40). The multi-well collection plate (40) is preferably made of a solvent permeable gel, most preferably a hydrophilic, polymeric gel such as agarose or cross-linked polyacrylamide.

  4. An integrated multiple capillary array electrophoresis system for high-throughput DNA sequencing

    Energy Technology Data Exchange (ETDEWEB)

    Lu, X.

    1998-03-27

    A capillary array electrophoresis system was chosen to perform DNA sequencing because of several advantages such as rapid heat dissipation, multiplexing capabilities, gel matrix filling simplicity, and the mature nature of the associated manufacturing technologies. There are two major concerns for multiple capillary systems. One concern is inter-capillary cross-talk, and the other is excitation and detection efficiency. Cross-talk is eliminated through proper optical coupling, good focusing, and immersing the capillary array into index-matching fluid. A side-entry excitation scheme with orthogonal detection was established for a large capillary array. Two 100-capillary array formats were used for DNA sequencing. One format is a cylindrical capillary with 150 µm o.d. and 75 µm i.d., and the other is a square capillary with a 300 µm outer edge and a 75 µm inner edge. This project is focused on the development of excitation and detection of DNA as well as performing DNA sequencing. The DNA injection schemes are discussed for the cases of single and bundled capillaries. An individual sampling device was designed. Base-calling was performed for a capillary from the capillary array with an accuracy of 98%.

  5. μTAS (micro total analysis systems) for the high-throughput measurement of nanomaterial solubility

    International Nuclear Information System (INIS)

    Tantra, R; Jarman, J

    2013-01-01

    There is a consensus in the nanoecotoxicology community that better analytical tools, i.e. faster and more accurate ones, are needed for the physicochemical characterisation of nanomaterials in environmentally/biologically relevant media. In this study, we introduce the concept of μTAS (Micro Total Analysis Systems), a term coined to encapsulate the integration of laboratory processes on a single microchip. Our focus here is on the use of a capillary electrophoresis (CE) microchip with conductivity detection and how it may be used for the measurement of dissolution of metal oxide nanomaterials. Our preliminary results clearly show promise in that the device is able to: (a) measure ionic zinc in various ecotox media with high selectivity, and (b) track the dynamic dissolution events of zinc oxide (ZnO) nanomaterial when dispersed in fish medium.

  6. Throughput of Cellular Systems with Conferencing Mobiles and Cooperative Base Stations

    Directory of Open Access Journals (Sweden)

    Somekh O

    2008-01-01

    This paper considers an enhancement to multicell processing for the uplink of a cellular system, whereby the mobile stations are allowed to exchange messages on orthogonal channels of fixed capacity (conferencing). Both conferencing among mobile stations in different cells and in the same cell (inter- and intracell conferencing, respectively) are studied. For both cases, it is shown that a rate-splitting transmission strategy, where part of the message is exchanged on the conferencing channels and then transmitted cooperatively to the base stations, is capacity-achieving for sufficiently large conferencing capacity. In the case of intercell conferencing, this strategy performs convolutional pre-equalization of the signal encoding the common messages in the spatial domain, where the number of taps of the finite-impulse-response equalizer depends on the number of conferencing rounds. Analysis in the low signal-to-noise ratio regime and numerical results validate the advantages of conferencing as a complementary technology to multicell processing.

  7. Characterization of microbial biofilms in a thermophilic biogas system by high-throughput metagenome sequencing.

    Science.gov (United States)

    Rademacher, Antje; Zakrzewski, Martha; Schlüter, Andreas; Schönberg, Mandy; Szczepanowski, Rafael; Goesmann, Alexander; Pühler, Alfred; Klocke, Michael

    2012-03-01

    DNAs of two biofilms of a thermophilic two-phase leach-bed biogas reactor fed with rye silage and winter barley straw were sequenced by 454-pyrosequencing technology to assess the biofilm-based microbial community and its genetic potential for anaerobic digestion. The studied biofilms matured on the surface of the substrates in the hydrolysis reactor (HR) and on the packing in the anaerobic filter reactor (AF). The classification of metagenome reads showed Clostridium to be the most prevalent bacterial genus in the HR, indicating a predominant role in plant material digestion. Notably, the analysis yielded insights into the genetic potential of plant-degrading bacteria, as well as of further bacterial groups that may assist Clostridium in carbohydrate degradation. Methanosarcina and Methanothermobacter were determined to be the most prevalent methanogenic archaea. Consequently, biofilm-based methanogenesis in this system might be driven by the hydrogenotrophic pathway but also by aceticlastic methanogenesis, depending on metabolite concentrations such as the acetic acid concentration. Moreover, bacteria capable of acetate oxidation in syntrophic interaction with methanogens were also predicted. Finally, the metagenome analysis unveiled a large number of reads of unidentified microbial origin, indicating that the anaerobic degradation process may also be conducted by hitherto unknown species. © 2011 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.

  8. High Throughput Phenotyping of Blueberry Bush Morphological Traits Using Unmanned Aerial Systems

    Directory of Open Access Journals (Sweden)

    Aaron Patrick

    2017-12-01

    Phenotyping morphological traits of blueberry bushes in the field is important for selecting genotypes that are easily harvested by mechanical harvesters. Morphological data can also be used to assess the effects of crop treatments such as plant growth regulators, fertilizers, and environmental conditions. This paper investigates the feasibility and accuracy of an inexpensive unmanned aerial system in determining the morphological characteristics of blueberry bushes. Color images collected by a quadcopter are processed into three-dimensional point clouds via structure-from-motion algorithms. Bush height, extents, canopy area, and volume, in addition to crown diameter and width, are derived and referenced to ground truth. In an experimental farm, twenty-five bushes were imaged by a quadcopter. Height and width dimensions achieved a mean absolute error of 9.85 cm before and 5.82 cm after systematic under-estimation correction. Strong correlation was found between manual and image-derived bush volumes and their traditional growth indices. Hedgerows of three Southern Highbush varieties were imaged at a commercial farm to extract five morphological features (base angle, blockiness, crown percent height, crown ratio, and vegetation ratio) associated with cultivation and machine harvestability. The bushes were found to be partially separable by multivariate analysis. The methodology developed from this study is not only valuable for plant breeders to screen genotypes with bush morphological traits that are suitable for machine harvest, but can also aid producers in crop management such as pruning and plot layout organization.
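Once a structure-from-motion point cloud is available, bush height reduces to simple elevation arithmetic. A hypothetical sketch (the ground-plane and percentile choices are illustrative assumptions, not the paper's exact procedure):

```python
def bush_height(points_z, ground_z=None, upper_pct=99):
    """Estimate bush height as an upper percentile of point elevations
    minus the ground elevation; the percentile guards against stray
    reconstruction outliers above the canopy."""
    zs = sorted(points_z)
    if ground_z is None:
        ground_z = zs[0]          # lowest point taken as local ground
    k = min(len(zs) - 1, int(len(zs) * upper_pct / 100))
    return zs[k] - ground_z

h = bush_height([0.0, 0.5, 1.0, 1.2, 1.5])   # 1.5 m bush on flat ground
```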

  9. New approach to exploit optimally the PV array output energy by maximizing the discharge rate of a directly-coupled photovoltaic water pumping system (DC/PVPS)

    International Nuclear Information System (INIS)

    Boutelhig, Azzedine; Hadj Arab, Amar; Hanini, Salah

    2016-01-01

    Highlights: • Mismatches in a designed directly-coupled PV pumping system have been highlighted. • A new approach predicting the maximal discharge has been developed. • The approach has been discussed with respect to its linearity coefficient. • The approach's effectiveness has been investigated and confirmed. • Theoretical and experimental values have been compared and found to agree. - Abstract: A directly-coupled photovoltaic water pumping system (DC/PVPS) is generally designed by considering the worst-month conditions on the lowest daylight-hours, the maximum monthly daily required water volume, and a tank to store the excess water. If hydraulic storage (a water tank) is absent or under-dimensioned, the extra amount of pumped water is lost or not put to reasonable use when the system is operated over the full daylight hours. Moreover, the extra amount of energy that might be produced by the PV generator is not exploited when the system is operated only during the period needed to satisfy the demand. Beyond an accurate design that satisfies the end-user, a new approach has been developed with the aim of maximally exploiting the PV array's energy production by maximizing the discharge rate of the system. The methodology consists of bringing the demanded energy as close as possible to the supplied energy over a full operating day. Based on the demand/supply energy condition, the approach has been developed upon the PV array and pump performance models. The resulting approach predicts the maximum delivery capacity of the system as monthly daily water volumes versus the monthly daily averages of solar irradiation previously recorded. Its efficacy has been investigated and discussed according to the estimated and experimental values of its linearity coefficient, following characterization tests of a designed system carried out at our pumping test facility in Ghardaia (Algeria). The new theoretically and experimentally obtained flow-rates fit well, except

  10. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....

  11. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
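The quantity being maximized in both records is the entropy of a Markov process. As a minimal sketch (not the papers' Interval-Markov-Chain algorithm), the entropy rate of a fixed finite chain is the stationary-weighted sum of its per-state transition entropies:

```python
import math

def row_entropy(row):
    """Shannon entropy (bits) of one row of a transition matrix."""
    return -sum(p * math.log2(p) for p in row if p > 0)

def stationary(P, iters=1_000):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_rate(P):
    """Entropy rate: sum_i pi_i * H(row_i)."""
    pi = stationary(P)
    return sum(pi[i] * row_entropy(P[i]) for i in range(len(P)))

# A maximally random two-state chain leaks one bit per step:
H = entropy_rate([[0.5, 0.5], [0.5, 0.5]])
```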

  12. High-throughput continuous cryopump

    International Nuclear Information System (INIS)

    Foster, C.A.

    1986-01-01

    A cryopump with a unique method of regeneration which allows continuous operation at high throughput has been constructed and tested. Deuterium was pumped continuously at a throughput of 30 Torr·L/s at a speed of 2000 L/s and a compression ratio of 200. Argon was pumped at a throughput of 60 Torr·L/s at a speed of 1275 L/s. To produce continuous operation of the pump, a method of regeneration that does not thermally cycle the pump is employed. A small chamber (the ''snail'') passes over the pumping surface and removes the frost from it, either by mechanical action with a scraper or by local heating. The material removed is topologically in a secondary vacuum system with low conductance into the primary vacuum; thus, the exhaust can be pumped at pressures up to an effective compression ratio determined by the ratio of the pumping speed to the leakage conductance of the snail. The pump, which is all-metal-sealed and dry and which regenerates every 60 s, would be an ideal system for pumping tritium. Potential fusion applications are for pump limiters, for repeating pneumatic pellet injection lines, and for the centrifuge pellet injector spin tank, all of which will require pumping tritium at high throughput. Industrial applications requiring ultraclean pumping of corrosive gases at high throughput, such as the reactive ion etch semiconductor process, may also be feasible.
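The throughput and speed figures above are linked by the basic vacuum relation P = Q / S. A quick check of the reported operating points (straightforward arithmetic, not taken from the paper):

```python
def inlet_pressure(throughput_torr_l_s, speed_l_s):
    """Inlet pressure (Torr) from gas throughput Q (Torr·L/s) and
    pumping speed S (L/s): P = Q / S."""
    return throughput_torr_l_s / speed_l_s

p_d2 = inlet_pressure(30, 2000)   # deuterium operating point: 0.015 Torr
p_ar = inlet_pressure(60, 1275)   # argon operating point
```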

  13. Throughput and Delay Analysis of HARQ with Code Combining over Double Rayleigh Fading Channels

    KAUST Repository

    Chelli, Ali

    2018-01-15

    This paper proposes the use of hybrid automatic repeat request (HARQ) with code combining (HARQ-CC) to offer reliable communications over double Rayleigh channels. The double Rayleigh fading channel is of particular interest to vehicle-to-vehicle communication systems as well as amplify-and-forward relaying and keyhole channels. This work studies the performance of HARQ-CC over double Rayleigh channels from an information-theoretic perspective. Analytical approximations are derived for the ε-outage capacity, the average number of transmissions, and the throughput of HARQ-CC. Moreover, we evaluate the delay experienced by Poisson arriving packets for HARQ-CC. We provide analytical expressions for the average waiting time, the packet sojourn time, the average consumed power, and the energy efficiency. In our investigation, we take into account the impact of imperfect feedback on different performance metrics. Additionally, we explore the tradeoff between energy efficiency and throughput. The proposed scheme is shown to maintain the outage probability below a specified threshold ε, which ensures link reliability. Meanwhile, HARQ-CC implicitly adapts the transmission rate to the channel conditions such that the throughput is maximized. Our results demonstrate that HARQ-CC improves the achievable communication rate compared to fixed time diversity schemes. To maximize the throughput of HARQ-CC, the rate per HARQ round should be less than the rate required to meet the outage constraint. Our investigation of the performance of HARQ-CC over Rayleigh and double Rayleigh channels shows that double Rayleigh channels have a higher severity of fading and result in a larger degradation of the throughput. Our analysis reveals that HARQ with incremental redundancy (HARQ-IR) achieves a larger throughput compared to HARQ-CC, while HARQ-CC is simpler to implement, has a lower decoding…
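The outage behaviour described above can be probed with a small Monte Carlo sketch. Assumptions (not from the paper): unit-power double Rayleigh fading, code combining modelled as accumulating per-round SNR, and illustrative SNR/rate parameters.

```python
import numpy as np

def outage_harq_cc(snr_db, rate, max_rounds, n=200_000, seed=0):
    """Estimate the outage probability of HARQ with code combining over a
    double Rayleigh channel: the accumulated mutual information after
    max_rounds must exceed the target rate (bits/s/Hz)."""
    rng = np.random.default_rng(seed)
    snr = 10 ** (snr_db / 10)
    # Double Rayleigh gain: product of two independent unit-power Rayleigh fades.
    h = (rng.rayleigh(np.sqrt(0.5), (n, max_rounds))
         * rng.rayleigh(np.sqrt(0.5), (n, max_rounds)))
    acc_snr = np.cumsum(snr * h**2, axis=1)  # code combining adds per-round SNRs
    mi = np.log2(1.0 + acc_snr[:, -1])
    return float((mi < rate).mean())

# More HARQ rounds should drive the outage probability down.
p1 = outage_harq_cc(snr_db=5, rate=2.0, max_rounds=1)
p4 = outage_harq_cc(snr_db=5, rate=2.0, max_rounds=4)
print(p1, p4)
```

The heavier tail of the product fading is what makes the single-round outage noticeably worse than under plain Rayleigh fading at the same average SNR.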

  14. High throughput synthesis and characterization of the PbnNb2O5+n (0.5 < n < 4.1) system on a single chip

    International Nuclear Information System (INIS)

    Mirsaneh, Mehdi; Hayden, Brian E.; Miao Shu; Pokorny, Jan; Perini, Steve; Furman, Eugene; Lanagan, Michael T.; Ubic, Rick; Reaney, Ian M.

    2011-01-01

    Most high throughput studies focus on assessing the effect of composition within a single known fundamental structure type, such as perovskite. Here we demonstrate how high throughput synthesis and screening can be used to establish structure-property relations in the PbO-Nb2O5 system, for which eight distinct fundamental structure types are known to exist. PbNb4O11, PbNb2O6 and pyrochlore could be easily distinguished by X-ray diffraction (XRD). However, XRD was insensitive to distortions of the pyrochlore structure and instead Raman spectroscopy was utilized to determine changes in symmetry from cubic to rhombohedral as the PbO concentration increased. High throughput screening of the capacitance revealed permittivity (εr) maxima in the PbNb4O11 (εr = 700) and cubic pyrochlore phases (εr = 450). The εr of PbNb4O11 has not to date been reported, but the value for cubic pyrochlore is higher than that reported for bulk ceramics (εr = 270). Initial high electric field studies also revealed exceptionally high tunability (four times that reported for bismuth zinc niobate-based pyrochlores) of the capacitance in the pyrochlore phase.

  15. Maximizers versus satisficers

    OpenAIRE

    Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...

  16. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques, we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models.

  17. Cell-Based Reporter System for High-Throughput Screening of MicroRNA Pathway Inhibitors and Its Limitations

    Czech Academy of Sciences Publication Activity Database

    Bruštíková, Kateřina; Sedlák, David; Kubíková, Jana; Škuta, Ctibor; Šolcová, Kateřina; Malík, Radek; Bartůněk, Petr; Svoboda, Petr

    2018-01-01

    Roč. 9 (2018), č. článku 45. ISSN 1664-8021 R&D Projects: GA ČR GA13-29531S; GA MŠk LO1220; GA MŠk LM2015063; GA MŠk LM2011022 Institutional support: RVO:68378050 Keywords : miRNA * high-throughput screening * miR-30 * let-7 * Argonaute Subject RIV: EB - Genetics ; Molecular Biology Impact factor: 3.789, year: 2016

  18. High-Throughput Fabrication of Nanocone Substrates through Polymer Injection Moulding For SERS Analysis in Microfluidic Systems

    DEFF Research Database (Denmark)

    Viehrig, Marlitt; Matteucci, Marco; Thilsted, Anil H.

    …analysis. Metal-capped silicon nanopillars, fabricated through a maskless ion etch, are state-of-the-art for on-chip SERS substrates. A dense cluster of high-aspect-ratio polymer nanocones was achieved by using high-throughput polymer injection moulding over a large area, replicating a silicon nanopillar… structure. Gold-capped polymer nanocones display SERS sensitivity similar to that of silicon nanopillars, while being easily integrable into microfluidic chips…

  19. Energy resolution and throughput of a new real time digital pulse processing system for x-ray and gamma ray semiconductor detectors

    International Nuclear Information System (INIS)

    Abbene, L; Gerardi, G; Raso, G; Brai, M; Principato, F; Basile, S

    2013-01-01

    New generation spectroscopy systems have advanced towards digital pulse processing (DPP) approaches. DPP systems, based on direct digitizing and processing of detector signals, have recently been favoured over analog pulse processing electronics, ensuring higher flexibility, stability, lower dead time, higher throughput and better spectroscopic performance. In this work, we present the performance of a new real-time DPP system for X-ray and gamma ray semiconductor detectors. The system is based on a commercial digitizer equipped with a custom DPP firmware, developed by our group, for on-line pulse shape and height analysis. X-ray and gamma ray spectra measurements with cadmium telluride (CdTe) and germanium (Ge) detectors, coupled to resistive-feedback preamplifiers, highlight the excellent performance of the system in both low- and high-rate environments (up to 800 kcps). A comparison with conventional analog electronics showed the better high-rate capabilities of the digital approach, in terms of energy resolution and throughput. These results make the proposed DPP system a very attractive tool both for laboratory research and for the development of advanced detection systems for high-rate-resolution spectroscopic imaging, recently proposed in diagnostic medicine, industrial imaging and security screening.
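The throughput-versus-input-rate behaviour of pulse processors is commonly summarized by standard dead-time models; a hedged sketch using the generic non-paralyzable and paralyzable formulas (textbook models, not the specific behaviour of this DPP firmware):

```python
import math

def throughput_nonparalyzable(rate_in, tau):
    """Recorded rate for a non-paralyzable dead time tau (s)."""
    return rate_in / (1.0 + rate_in * tau)

def throughput_paralyzable(rate_in, tau):
    """Recorded rate for a paralyzable dead time tau (s); peaks at input 1/tau."""
    return rate_in * math.exp(-rate_in * tau)

tau = 1e-6  # 1 µs effective processing time per pulse (illustrative)
for r in (1e5, 8e5):  # 100 kcps and 800 kcps input, as in the abstract's range
    print(r, throughput_nonparalyzable(r, tau), throughput_paralyzable(r, tau))
```

Lower effective dead time is exactly why the digital approach retains more of the input rate at 800 kcps than the analog chain.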

  20. Multi-objective optimal reactive power dispatch to maximize power system social welfare in the presence of generalized unified power flow controller

    Directory of Open Access Journals (Sweden)

    Suresh Chintalapudi Venkata

    2015-09-01

    Full Text Available In this paper, a novel non-linear optimization problem is formulated to maximize the social welfare in a restructured environment with a generalized unified power flow controller (GUPFC). This paper presents a methodology to optimally allocate reactive power by minimizing voltage deviation at load buses and total transmission power losses so as to maximize the social welfare. The conventional active power generation cost function is modified by adding to it the costs of reactive power generated by the generators and shunt capacitors and the cost of total power losses. The formulated objectives are optimized individually and simultaneously as a multi-objective optimization problem, while satisfying equality, inequality, practical and device operational constraints. A new optimization method, based on two-stage initialization and random distribution processes, is proposed; its effectiveness is tested on the IEEE 30-bus system, and a detailed analysis is carried out.

  1. The French press: a repeatable and high-throughput approach to exercising zebrafish (Danio rerio).

    Science.gov (United States)

    Usui, Takuji; Noble, Daniel W A; O'Dea, Rose E; Fangmeier, Melissa L; Lagisz, Malgorzata; Hesselson, Daniel; Nakagawa, Shinichi

    2018-01-01

    Zebrafish are increasingly used as a vertebrate model organism for various traits including swimming performance, obesity and metabolism, necessitating high-throughput protocols to generate standardized phenotypic information. Here, we propose a novel and cost-effective method for exercising zebrafish, using a coffee plunger and magnetic stirrer. To demonstrate the use of this method, we conducted a pilot experiment to show that this simple system provides repeatable estimates of maximal swim performance (intra-class correlation [ICC] = 0.34-0.41) and observe that exercise training of zebrafish on this system significantly increases their maximum swimming speed. We propose this high-throughput and reproducible system as an alternative to traditional linear chamber systems for exercising zebrafish and similarly sized fishes.
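Repeatability figures like the quoted ICC = 0.34-0.41 are typically computed from a subjects × trials matrix of maximal-swim measurements. A minimal sketch using the one-way random-effects ICC(1,1) formula on synthetic data (the formula choice and the data are assumptions for illustration, not the authors' analysis):

```python
import numpy as np

def icc_oneway(x):
    """ICC(1,1) from an (n_subjects, k_trials) matrix of repeated measures."""
    n, k = x.shape
    grand = x.mean()
    msb = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)               # between-subject
    msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))  # within-subject
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
subject = rng.normal(0.0, 1.0, size=(30, 1))           # stable individual differences
trials = subject + rng.normal(0.0, 1.0, size=(30, 3))  # plus trial-to-trial noise
print(icc_oneway(trials))  # ~0.5 when subject and noise variances are equal
```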

  2. Effect of suspension systems on the physiological and psychological responses to sub-maximal biking on simulated smooth and bumpy tracks.

    Science.gov (United States)

    Titlestad, John; Fairlie-Clarke, Tony; Whittaker, Arthur; Davie, Mark; Watt, Ian; Grant, Stanley

    2006-02-01

    The aim of this study was to compare the physiological and psychological responses of cyclists riding on a hard tail bicycle and on a full suspension bicycle. Twenty males participated in two series of tests. A test rig held the front axle of the bicycle steady while the rear wheel rotated against a heavy roller with bumps (or no bumps) on its surface. In the first series of tests, eight participants (age 19-27 years, body mass 65-82 kg) were tested on both the full suspension and hard tail bicycles with and without bumps fitted to the roller. The second series of tests repeated the bump tests with a further six participants (age 22-31 years, body mass 74-94 kg) and also involved an investigation of familiarization effects with the final six participants (age 21-30 years, body mass 64-80 kg). Heart rate, oxygen consumption (VO(2)), rating of perceived exertion (RPE) and comfort were recorded during 10 min sub-maximal tests. Combined data for the bumps tests show that the full suspension bicycle was significantly different (P < 0.001) from the hard tail bicycle on all four measures. Oxygen consumption, heart rate and RPE were lower on average by 8.7 (s = 3.6) ml . kg(-1) . min(-1), 32.1 (s = 12.1) beats . min(-1) and 2.6 (s = 2.0) units, respectively. Comfort scores were higher (better) on average by 1.9 (s = 0.8) units. For the no bumps tests, the only statistically significant difference (P = 0.008) was in VO(2), which was lower for the hard tail bicycle by 2.2 (s = 1.7) ml . kg(-1) . min(-1). The results indicate that the full suspension bicycle provides a physiological and psychological advantage over the hard tail bicycle during simulated sub-maximal exercise on bumps.

  3. High Throughput Facility

    Data.gov (United States)

    Federal Laboratory Consortium — Argonne's high throughput facility provides highly automated and parallel approaches to material and materials chemistry development. The facility allows scientists...

  4. Maximally multipartite entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio

    2008-06-01

    We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.

  5. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

    Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  6. Validation of a high-throughput fermentation system based on online monitoring of biomass and fluorescence in continuously shaken microtiter plates

    Directory of Open Access Journals (Sweden)

    Kensy Frank

    2009-06-01

    Full Text Available Abstract Background: An advanced version of a recently reported high-throughput fermentation system with online measurement, called BioLector, and its validation is presented. The technology combines high-throughput screening and high information content by applying online monitoring of scattered light and fluorescence intensities in continuously shaken microtiter plates. Various examples in calibration of the optical measurements, clone and media screening and promoter characterization are given. Results: Bacterial and yeast biomass concentrations of up to 50 g/L cell dry weight could be linearly correlated to scattered light intensities. In media screening, the BioLector could clearly demonstrate its potential for detecting different biomass and product yields and deducing specific growth rates for quantitatively evaluating media and nutrients. Growth inhibition due to inappropriate buffer conditions could be detected by reduced growth rates and a temporary increase in NADH fluorescence. GFP served very well as a reporter protein for investigating promoter regulation under different carbon sources in yeast strains. A clone screening of 90 different GFP-expressing Hansenula polymorpha clones depicted the broad distribution of growth behavior and an even stronger distribution in GFP expression. The importance of mass transfer conditions could be demonstrated by varying filling volumes of an E. coli culture in 96-well MTPs. The different filling volumes cause a deviation in the culture growth and acidification, monitored via scattered light intensities and the fluorescence of a pH indicator, respectively. Conclusion: The BioLector technology is a very useful tool to perform quantitative microfermentations under engineered reaction conditions. With this technique, specific yields and rates can be directly deduced from online biomass and product concentrations, which is superior to existing technologies such as microplate readers or optode…
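The reported linear correlation between scattered light and biomass is, in effect, a calibration fit that is later inverted to read biomass online. A minimal sketch with synthetic calibration points (the gain and offset values are assumptions for illustration):

```python
import numpy as np

# Hypothetical calibration: cell dry weight (g/L) vs. scattered light (a.u.)
cdw = np.array([0.0, 5.0, 10.0, 20.0, 35.0, 50.0])
scatter = 40.0 + 12.0 * cdw  # synthetic readings: instrument offset + linear gain

slope, offset = np.polyfit(cdw, scatter, 1)  # fit scatter = slope*cdw + offset

def biomass_from_scatter(signal):
    """Invert the linear calibration to estimate biomass from a reading."""
    return (signal - offset) / slope

print(biomass_from_scatter(340.0))  # 340 a.u. -> 25.0 g/L with these parameters
```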

  7. Shortcuts to adiabatic passage for the generation of a maximal Bell state and W state in an atom–cavity system

    Science.gov (United States)

    Lu, Mei; Chen, Qing-Qin

    2018-05-01

    We propose an efficient scheme to generate maximally entangled states between two three-level atoms in a cavity quantum electrodynamics system based on shortcuts to adiabatic passage. In the accelerated scheme, there is no need to design a time-varying coupling coefficient for the cavity. We only need to tactfully design time-dependent lasers to drive the system into the desired entangled states. Controlling the detuning between the cavity mode and the lasers, we deduce a determinate analytical formula for this quantum information processing. The lasers do not need to distinguish which atom is to be affected, so the implementation of the experiment is simpler. The method is also generalized to generate a W state. Moreover, the accelerated program can be extended to a multi-body system, and an analytical solution in a higher-dimensional system can be achieved. The influence of decoherence and variations of the parameters are discussed by numerical simulation. The results show that the maximally entangled states can be quickly prepared in a short time with high fidelity, and are robust against both parameter fluctuations and dissipation. Our study enriches the physics and applications of multi-particle quantum entanglement preparation via shortcuts to adiabatic passage in cavity quantum electrodynamics.

  8. Real-Time Control System for Improved Precision and Throughput in an Ultrafast Carbon Fiber Placement Robot Using a SoC FPGA Extended Processing Platform

    Directory of Open Access Journals (Sweden)

    Gilberto Ochoa-Ruiz

    2017-01-01

    Full Text Available We present an architecture for accelerating the processing and execution of control commands in an ultrafast fiber placement robot. The system consists of a robotic arm designed by Coriolis Composites whose purpose is to move along a surface, on which composite fibers are deposited via an independently controlled head. In the first system implementation, the control commands were sent via Profibus by a PLC, limiting the reaction time and thus the precision of the fiber placement and the maximum throughput. Therefore, a custom real-time solution was imperative in order to improve the performance and to meet the stringent requirements of the target industry (avionics, aeronautical systems). The solution presented in this paper is based on the use of a SoC FPGA processing platform running a real-time operating system (FreeRTOS), which has enabled an improved command retrieval mechanism. The system's placement precision was improved by a factor of 20 (from 1 mm to 0.05 mm), while the maximum achievable throughput was 1 m/s, compared to the average 30 cm/s provided by the original solution, enabling the fabrication of larger and more complex pieces in significantly less time.

  9. Analysis of the maximal possible grid relief from PV-peak-power impacts by using storage systems for increased self-consumption

    International Nuclear Information System (INIS)

    Moshövel, Janina; Kairies, Kai-Philipp; Magnor, Dirk; Leuthold, Matthias; Bost, Mark; Gährs, Swantje; Szczechowicz, Eva; Cramer, Moritz; Sauer, Dirk Uwe

    2015-01-01

    Highlights: • Presentation of a MATLAB battery storage model. • Development of a controlled persistence forecast management strategy. • Perfect forecast compared with an easily implementable persistence forecast. • More grid relief with forecast than with strategies to maximize self-consumption. - Abstract: For future energy supply systems, the effects and benefits of battery storage systems in households with photovoltaic (PV) generators, and the effects on distribution and transmission grids, need to be identified and analyzed. The development of grid-relieving management strategies for the storage system, in due consideration of self-consumption, is a necessary step forward in order to analyze the potential of private home battery storage systems to reduce stress on the power supply system. A MATLAB-based model of a lithium-ion storage system has been developed. The model is applicable for a wide range of PV generator sizes, different battery storage systems and diverse management strategies. In order to identify the potential of grid-relieving forecast strategies, without discharging the storage into the grid, a management strategy based on persistence forecasts of solar radiation and household load demand has been implemented and analyzed. To minimize forecast uncertainties, a proportional-plus-integral controller has been developed. The persistence forecast management strategy is applicable in real-life PV-battery systems and, due to the simple forecast, existing systems can be equipped with such a management system with little effort. As a result, it will be shown that a storage system management based on forecasts has a significantly higher potential to relieve the grid than a system that only maximizes self-consumption, as is usual nowadays. Besides, such a management strategy is able to unload the grid more than a static power reduction to 70% of the nominal power rating according to the current German Renewable Energy Sources Act (EEG). At the…
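The persistence-forecast idea with a PI correction can be sketched in a few lines: the forecast for each hour is simply the value observed at the same hour the previous day, and a PI term trims the planned charge power as forecast errors accumulate. The discrete-time model, gains and profiles below are illustrative assumptions, not the authors' MATLAB implementation:

```python
# Persistence forecast with a PI correction for scheduling battery charging.
def pi_controller(kp, ki):
    integral = 0.0
    def step(error):
        nonlocal integral
        integral += error                 # integral action on accumulated error
        return kp * error + ki * integral
    return step

yesterday_pv = [0, 0, 1.2, 2.5, 3.0, 2.4, 0.8, 0]  # kW, persistence forecast source
today_pv     = [0, 0, 1.0, 2.8, 3.3, 2.0, 0.6, 0]  # kW, actual generation

trim = pi_controller(kp=0.5, ki=0.1)
planned = []
for forecast, actual in zip(yesterday_pv, today_pv):
    correction = trim(actual - forecast)       # positive when PV beats the forecast
    planned.append(max(0.0, forecast + correction))
print(planned)
```

Delaying charging toward the forecast midday peak (rather than charging greedily at sunrise) is what lets the battery absorb the PV peak and relieve the grid.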

  10. Why mutual helping in most natural systems is neither conflict-free nor based on maximal conflict.

    Science.gov (United States)

    Bshary, Redouan; Zuberbühler, Klaus; van Schaik, Carel P

    2016-02-05

    Mutual helping for direct benefits can be explained by various game theoretical models, which differ mainly in terms of the underlying conflict of interest between two partners. Conflict is minimal if helping is self-serving and the partner benefits as a by-product. In contrast, conflict is maximal if partners are in a prisoner's dilemma with both having the pay-off-dominant option of not returning the other's investment. Here, we provide evolutionary and ecological arguments for why these two extremes are often unstable under natural conditions and propose that interactions with intermediate levels of conflict are frequent evolutionary endpoints. We argue that by-product helping is prone to becoming an asymmetric investment game since even small variation in by-product benefits will lead to the evolution of partner choice, leading to investments by the chosen class. Second, iterated prisoner's dilemmas tend to take place in stable social groups where the fitness of partners is interdependent, with the effect that a certain level of helping is self-serving. In sum, intermediate levels of mutual helping are expected in nature, while efficient partner monitoring may allow reaching higher levels. © 2016 The Author(s).

  11. Optimization of cell line development in the GS-CHO expression system using a high-throughput, single cell-based clone selection system.

    Science.gov (United States)

    Nakamura, Tsuyoshi; Omasa, Takeshi

    2015-09-01

    Therapeutic antibodies are commonly produced by high-expressing, clonal and recombinant Chinese hamster ovary (CHO) cell lines. Currently, CHO cells dominate as a commercial production host because of their ease of use, established regulatory track record, and safety profile. CHO-K1SV is a suspension, protein-free-adapted CHO-K1-derived cell line employing the glutamine synthetase (GS) gene expression system (GS-CHO expression system). The selection of high-producing mammalian cell lines is a crucial step in process development for the production of therapeutic antibodies. In general, cloning by the limiting dilution method is used to isolate high-producing monoclonal CHO cells. However, the limiting dilution method is time consuming and has a low probability of monoclonality. To minimize the duration and increase the probability of obtaining high-producing clones with high monoclonality, an automated single cell-based clone selector, the ClonePix FL system, is available. In this study, we applied the high-throughput ClonePix FL system for cell line development using CHO-K1SV cells and investigated efficient conditions for single cell-based clone selection. CHO-K1SV cell growth at the pre-picking stage was improved by optimizing the formulation of semi-solid medium. The efficiency of picking and cell growth at the post-picking stage was improved by optimization of the plating time without decreasing the diversity of clones. The conditions for selection, including the medium formulation, were the most important factors for the single cell-based clone selection system to construct a high-producing CHO cell line. Copyright © 2015 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  12. Is CP violation maximal

    International Nuclear Information System (INIS)

    Gronau, M.

    1984-01-01

    Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP-violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references.

  13. Prototype Systems Containing Human Cytochrome P450 for High-Throughput Real-Time Detection of DNA Damage by Compounds That Form DNA-Reactive Metabolites.

    Science.gov (United States)

    Brito Palma, Bernardo; Fisher, Charles W; Rueff, José; Kranendonk, Michel

    2016-05-16

    The formation of reactive metabolites through biotransformation is the suspected cause of many adverse drug reactions. Testing for the propensity of a drug to form reactive metabolites has increasingly become an integral part of lead-optimization strategy in drug discovery. DNA reactivity is one undesirable facet of a drug or its metabolites and can lead to increased risk of cancer and reproductive toxicity. Many drugs are metabolized by cytochromes P450 in the liver and other tissues, and these reactions can generate hard electrophiles. These hard electrophilic reactive metabolites may react with DNA and may be detected in standard in vitro genotoxicity assays; however, the majority of these assays fall short due to the use of animal-derived organ extracts that inadequately represent human metabolism. The current study describes the development of bacterial systems that efficiently detect DNA-damaging electrophilic reactive metabolites generated by human P450 biotransformation. These assays use a GFP reporter system that detects DNA damage through induction of the SOS response and a GFP reporter to control for cytotoxicity. Two human CYP1A2-competent prototypes presented here have appropriate characteristics for the detection of DNA-damaging reactive metabolites in a high-throughput manner. The advantages of this approach include a short assay time (120-180 min) with real-time measurement, sensitivity to small amounts of compound, and adaptability to a microplate format. These systems are suitable for high-throughput assays and can serve as prototypes for the development of future enhanced versions.

  14. High-throughput in vivo genotoxicity testing: an automated readout system for the somatic mutation and recombination test (SMART).

    Directory of Open Access Journals (Sweden)

    Benoit Lombardot

    Full Text Available Genotoxicity testing is an important component of toxicity assessment. As illustrated by the European registration, evaluation, authorization, and restriction of chemicals (REACH) directive, it concerns all the chemicals used in industry. The commonly used in vivo mammalian tests appear to be ill adapted to tackle the large compound sets involved, due to throughput, cost, and ethical issues. The somatic mutation and recombination test (SMART) represents a more scalable alternative, since it uses Drosophila, which develops faster and requires less infrastructure. Despite these advantages, the manual scoring of the hairs on Drosophila wings required for the SMART limits its usage. To overcome this limitation, we have developed an automated SMART readout. It consists of automated imaging, followed by an image analysis pipeline that measures individual wing genotoxicity scores. Finally, we have developed a wing score-based dose-dependency approach that can provide genotoxicity profiles. We have validated our method using 6 compounds, obtaining profiles almost identical to those obtained from manual measures, even for low-genotoxicity compounds such as urethane. The automated SMART, with its faster and more reliable readout, fulfills the need for a high-throughput in vivo test. The flexible imaging strategy we describe and the analysis tools we provide should facilitate the optimization and dissemination of our methods.

  15. A high-throughput reactor system for optimization of Mo–V–Nb mixed oxide catalyst composition in ethane ODH

    KAUST Repository

    Zhu, Haibo; Laveille, Paco; Rosenfeld, Devon C.; Hedhili, Mohamed N.; Basset, Jean-Marie

    2015-01-01

    75 Mo-V-Nb mixed oxide catalysts with a broad range of compositions were prepared by a simple evaporation method, and were screened for the ethane oxidative dehydrogenation (ODH) reaction. The compositions of these 75 catalysts were systematically changed by varying the Nb loading and the Mo/V molar ratio. Characterization by XRD, XPS, H2-TPR and SEM revealed that an intimate structure is formed among the 3 components. The strong interaction among different components leads to the formation of a new phase or an "intimate structure". The dependency of conversion and selectivity on the catalyst composition was clearly demonstrated by the results of high-throughput testing. The optimized Mo-V-Nb molar composition was confirmed to be composed of a Nb content of 4-8%, a Mo content of 70-83%, and a V content of 12-25%. The enhanced catalytic performance of the mixed oxides is obviously due to the synergistic effects of the different components. The optimized compositions for ethane ODH revealed in our high-throughput tests and the structural information provided by our characterization studies can serve as the starting point for future efforts to improve the catalytic performance of Mo-V-Nb oxides. This journal is © The Royal Society of Chemistry.

  16. Bacterial Pathogens and Community Composition in Advanced Sewage Treatment Systems Revealed by Metagenomics Analysis Based on High-Throughput Sequencing

    Science.gov (United States)

    Lu, Xin; Zhang, Xu-Xiang; Wang, Zhu; Huang, Kailong; Wang, Yuan; Liang, Weigang; Tan, Yunfei; Liu, Bo; Tang, Junying

    2015-01-01

    This study used 454 pyrosequencing, Illumina high-throughput sequencing and metagenomic analysis to investigate bacterial pathogens and their potential virulence in a sewage treatment plant (STP) applying both conventional and advanced treatment processes. Pyrosequencing and Illumina sequencing consistently demonstrated that the Arcobacter genus occupied over 43.42% of the total abundance of potential pathogens in the STP. At species level, the potential pathogens Arcobacter butzleri, Aeromonas hydrophila and Klebsiella pneumoniae dominated in raw sewage, which was also confirmed by quantitative real-time PCR. Illumina sequencing also revealed the prevalence of various types of pathogenicity islands and virulence proteins in the STP. Most of the potential pathogens and virulence factors were eliminated in the STP, and the removal efficiency mainly depended on the oxidation ditch. Compared with sand filtration, magnetic resin seemed to achieve higher removals of most of the potential pathogens and virulence factors. However, the presence of residual A. butzleri in the final effluent still deserves more concern. The findings indicate that sewage acts as an important source of environmental pathogens, but STPs can effectively control their spread in the environment. Joint use of the high-throughput sequencing technologies is considered a reliable method for a deep and comprehensive overview of environmental bacterial virulence. PMID:25938416

  17. Task-oriented maximally entangled states

    International Nuclear Information System (INIS)

    Agrawal, Pankaj; Pradhan, B

    2010-01-01

    We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.

  18. Maximally Entangled Multipartite States: A Brief Survey

    International Nuclear Information System (INIS)

    Enríquez, M; Wintrowicz, I; Życzkowski, K

    2016-01-01

    The problem of identifying maximally entangled quantum states of composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of a maximally entangled state depends on the measure used. (paper)

  19. Essential attributes identified in the design of a Laboratory Information Management System for a high throughput siRNA screening laboratory.

    Science.gov (United States)

    Grandjean, Geoffrey; Graham, Ryan; Bartholomeusz, Geoffrey

    2011-11-01

    In recent years, high-throughput screening operations have become a critical application in functional and translational research. Although a seemingly unmanageable amount of data is generated by these high-throughput, large-scale techniques, through careful planning an effective Laboratory Information Management System (LIMS) can be developed and implemented to streamline all phases of a workflow. Just as important as the data mining and analysis procedures at the end of complex processes is the tracking of the individual steps of the applications that generate such data. Ultimately, the use of a customized LIMS enables users to extract meaningful results from large datasets while trusting the robustness of their assays. To illustrate the design of a custom LIMS, this practical example is provided to highlight the important aspects of designing a LIMS that effectively manages all aspects of an siRNA screening service. The system incorporates inventory management, control of workflow, data handling, and interaction with investigators, statisticians and administrators. All these modules are regulated in a synchronous manner within the LIMS. © 2011 Bentham Science Publishers

  20. Application of high-throughput mini-bioreactor system for systematic scale-down modeling, process characterization, and control strategy development.

    Science.gov (United States)

    Janakiraman, Vijay; Kwiatkowski, Chris; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming

    2015-01-01

    High-throughput systems and processes have typically been targeted for process development and optimization in the bioprocessing industry. For process characterization, bench-scale bioreactors have been the system of choice. Because different process conditions must be tested for multiple process parameters, process characterization studies typically span several months and are considered time- and resource-intensive. In this study, we show the application of a high-throughput mini-bioreactor system, the Advanced Microscale Bioreactor (ambr15™), to perform process characterization in less than a month and to develop an input control strategy. As a prerequisite to process characterization, a scale-down model was first developed in the ambr system (15 mL) using statistical multivariate analysis techniques, which showed comparability with both the manufacturing scale (15,000 L) and the bench scale (5 L). Volumetric sparge rates were matched between the ambr and manufacturing scales, and the ambr process matched the pCO2 profiles as well as several other process and product quality parameters. The scale-down model was used to perform the process characterization DoE study, and product quality results were generated. Upon comparison with DoE data from the bench-scale bioreactors, similar effects of process parameters on process yield and product quality were identified between the two systems. We used the ambr data to set action limits for the critical controlled parameters (CCPs), which were comparable to those from bench-scale bioreactor data. In other words, the current work shows that the ambr15™ system is capable of replacing the bench-scale bioreactor system for routine process development and process characterization. © 2015 American Institute of Chemical Engineers.

  1. The maximization of the productivity of aquatic plants for use in controlled ecological life support systems (CELSS)

    Science.gov (United States)

    Thompson, B. G.

    Lemna minor (common duckweed) and a Wolffia sp. were grown in submerged growth systems. Submerged growth increased the productivity/unit volume (P/UV) of the organisms and may allow these plants to be used in a controlled ecological life support system (CELSS).

  2. A mobile, high-throughput semi-automated system for testing cognition in large non-primate animal models of Huntington disease.

    Science.gov (United States)

    McBride, Sebastian D; Perentos, Nicholas; Morton, A Jennifer

    2016-05-30

    For reasons of cost and ethical concerns, models of neurodegenerative disorders such as Huntington disease (HD) are currently being developed in farm animals as an alternative to non-human primates. Developing reliable methods of testing cognitive function is essential to determining the usefulness of such models. Nevertheless, cognitive testing of farm animal species presents a unique set of challenges. The primary aims of this study were to develop and validate a mobile operant system suitable for high-throughput cognitive testing of sheep. We designed a semi-automated testing system with the capability of presenting stimuli (visual, auditory) and reward at six spatial locations. Fourteen normal sheep were used to validate the system using a two-choice visual discrimination task (2CVDT). Four stages of training devised to acclimatise animals to the system are also presented. All sheep progressed rapidly through the training stages, over eight sessions. All sheep learned the 2CVDT and performed at least one reversal stage. The mean number of trials the sheep took to reach criterion was 13.9±1.5 for the first acquisition learning and 19.1±1.8 for the reversal learning. This is the first mobile semi-automated operant system developed for testing cognitive function in sheep. We have designed and validated an automated operant behavioural testing system suitable for high-throughput cognitive testing in sheep and other medium-sized quadrupeds, such as pigs and dogs. Sheep performance in the 2CVDT was very similar to that reported for non-human primates and strongly supports the use of farm animals as pre-clinical models for the study of neurodegenerative diseases. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Optimization of the gas turbine-modular helium reactor using statistical methods to maximize performance without compromising system design margins

    International Nuclear Information System (INIS)

    Lommers, L.J.; Parme, L.L.; Shenoy, A.S.

    1995-07-01

    This paper describes a statistical approach for determining the impact of system performance and design uncertainties on power plant performance. The objectives of this design approach are to ensure that adequate margin is provided, that excess margin is minimized, and that full advantage can be taken of unconsumed margin. It is applicable to any thermal system in which these factors are important. The method is demonstrated using the Gas Turbine Modular Helium Reactor as an example. The quantitative approach described allows the characterization of plant performance and the specification of the system design requirements necessary to achieve the desired performance with high confidence. Performance variations due to design evolution, in-service degradation, and basic performance uncertainties are considered. The impact of all performance variabilities is combined using Monte Carlo analysis to predict the range of expected operation.

  4. Rethinking school-based health centers as complex adaptive systems: maximizing opportunities for the prevention of teen pregnancy and sexually transmitted infections.

    Science.gov (United States)

    Daley, Alison Moriarty

    2012-01-01

    This article examines school-based health centers (SBHCs) as complex adaptive systems, the current gaps that exist in contraceptive access, and the potential to maximize this community resource in teen pregnancy and sexually transmitted infection (STI) prevention efforts. Adolescent pregnancy is a major public health challenge for the United States. Existing community resources need to be considered for their potential to impact teen pregnancy and STI prevention efforts. SBHCs are one such community resource to be leveraged in these efforts. They offer adolescent-friendly primary care services and are responsive to the diverse needs of the adolescents utilizing them. However, current restrictions on contraceptive availability limit the ability of SBHCs to maximize opportunities for comprehensive reproductive care and create missed opportunities for pregnancy and STI prevention. A clinical case explores the current models of health care services related to contraceptive care provided in SBHCs and the ability to meet or miss the needs of an adolescent seeking reproductive care in a SBHC.

  5. High-throughput phenotyping of large wheat breeding nurseries using unmanned aerial system, remote sensing and GIS techniques

    Science.gov (United States)

    Haghighattalab, Atena

    Wheat breeders are in a race for genetic gain to secure the future nutritional needs of a growing population. Multiple barriers exist in the acceleration of crop improvement, and emerging technologies are reducing these obstacles. Advances in genotyping technologies have significantly decreased the cost of characterizing the genetic make-up of candidate breeding lines. However, this is just part of the equation. Field-based phenotyping informs a breeder's decision as to which lines move forward in the breeding cycle. This has long been the most expensive and time-consuming, though most critical, aspect of breeding. The grand challenge remains in connecting genetic variants to observed phenotypes, followed by predicting phenotypes based on the genetic composition of lines or cultivars. In this context, the current study was undertaken to investigate the utility of unmanned aerial systems (UAS) in assessing field trials in wheat breeding programs. The major objective was to integrate remotely sensed data with geospatial analysis for high-throughput phenotyping of large wheat breeding nurseries. The initial step was to develop and validate a semi-automated high-throughput phenotyping pipeline using a low-cost UAS and NIR camera, image processing, and radiometric calibration to build orthomosaic imagery and 3D models. The relationship between plot-level data (vegetation indices and height) extracted from UAS imagery and manual measurements was examined and found to show high correlation. Data derived from UAS imagery performed as well as manual measurements while exponentially increasing the amount of data available. The high-resolution, high-temporal HTP data extracted from this pipeline offered the opportunity to develop a within-season grain yield prediction model. Due to the variety of genotypes and environmental conditions, breeding trials are inherently spatial in nature and vary non-randomly across the field. This makes geographically weighted regression models a good choice as a
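    As a rough illustration of the plot-level vegetation indices mentioned above, the sketch below computes a per-plot mean NDVI from NIR and red reflectance bands. The arrays, mask, and function names are illustrative assumptions, not the actual pipeline described in this record.

    ```python
    import numpy as np

    def ndvi(nir, red, eps=1e-9):
        """Normalized Difference Vegetation Index per pixel: (NIR - Red) / (NIR + Red)."""
        nir = np.asarray(nir, dtype=float)
        red = np.asarray(red, dtype=float)
        return (nir - red) / (nir + red + eps)

    def plot_mean_ndvi(nir_band, red_band, plot_mask):
        """Mean NDVI over the pixels belonging to one breeding plot."""
        return float(ndvi(nir_band, red_band)[plot_mask].mean())

    # Toy 2x2 'orthomosaic' bands and a mask selecting one plot's pixels.
    nir = np.array([[0.8, 0.7], [0.6, 0.2]])
    red = np.array([[0.2, 0.3], [0.2, 0.2]])
    mask = np.array([[True, True], [False, False]])
    print(round(plot_mean_ndvi(nir, red, mask), 3))  # pixel NDVIs 0.6 and 0.4 -> 0.5
    ```

    In a real pipeline the mask would come from plot boundaries drawn in a GIS layer over the orthomosaic, and the per-plot means would then be correlated with manual measurements.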

  6. Performance analysis for optimum transmission and comparison with maximal ratio transmission for MIMO systems with cochannel interference

    Directory of Open Access Journals (Sweden)

    Lin Sheng-Chou

    2011-01-01

    Full Text Available Abstract This article presents a performance analysis of multiple-input/multiple-output (MIMO) systems with quadrature amplitude modulation (QAM) transmission in the presence of cochannel interference (CCI) in nonfading and flat Rayleigh fading environments. The use of optimum transmission (OT) and maximal ratio transmission (MRT) is considered and compared. In addition to determining precise results for the performance of QAM in the presence of CCI, another aim of this article is to examine the validity of the Gaussian interference model in MRT-based systems. Nyquist pulse shaping and the effects of cross-channel intersymbol interference produced by CCI, due to the random symbols of the interfering signals, are considered in the precise interference model. The error probability for each fading channel is estimated quickly and accurately using Gauss quadrature rules, which approximate the probability density function (pdf) of the output residual interference. The results of this article indicate that the Gaussian interference model may overestimate the effects of interference, particularly for high-order MRT-based MIMO systems over fading channels. In addition, OT cannot always outperform MRT, because of the significant noise enhancement incurred when OT attempts to cancel CCI; the outcome depends on the combination of antennas at the transmitter and the receiver, the number of interferers and the statistical characteristics of the channel.
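    As a minimal illustration of the MRT scheme compared above (a textbook sketch, not the article's analysis), the following code computes MRT transmit weights for a flat Rayleigh-fading channel and checks the standard property that they maximize the array gain. The channel model and variable names are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mrt_weights(h):
        """Maximal ratio transmission: unit-norm weights matched to the channel conjugate."""
        return h.conj() / np.linalg.norm(h)

    # Flat Rayleigh-fading channel from 4 transmit antennas to 1 receive antenna.
    h = (rng.standard_normal(4) + 1j * rng.standard_normal(4)) / np.sqrt(2)
    w = mrt_weights(h)

    # MRT achieves |h^T w|^2 = ||h||^2, the maximum over all unit-norm weight vectors
    # (Cauchy-Schwarz); this is why it is the noise-limited benchmark against OT.
    gain = abs(h @ w) ** 2
    assert np.isclose(gain, np.linalg.norm(h) ** 2)
    print(round(float(gain), 4))
    ```

    Unlike OT, these weights ignore the interferers' channels entirely, which is exactly why MRT avoids the noise enhancement the abstract attributes to interference cancellation.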

  7. Magnetic high throughput screening system for the development of nano-sized molecularly imprinted polymers for controlled delivery of curcumin.

    Science.gov (United States)

    Piletska, Elena V; Abd, Bashar H; Krakowiak, Agata S; Parmar, Anitha; Pink, Demi L; Wall, Katie S; Wharton, Luke; Moczko, Ewa; Whitcombe, Michael J; Karim, Kal; Piletsky, Sergey A

    2015-05-07

    Curcumin is a versatile anti-inflammatory and anti-cancer agent known for its low bioavailability, which could be improved by developing materials capable of binding and releasing the drug in a controlled fashion. The present study describes the preparation of magnetic nano-sized molecularly imprinted polymers (nanoMIPs) for the controlled delivery of curcumin, and their high-throughput characterisation using microtitre plates modified with magnetic inserts. NanoMIPs were synthesised using functional monomers chosen with the aid of molecular modelling. The rate of release of curcumin from five polymers was studied under aqueous conditions and was found to correlate well with the binding energies obtained computationally. The presence of specific monomers was shown to be significant both in ensuring effective binding of curcumin and in determining the rate of release obtained. Characterisation of the polymer particles was carried out using the dynamic light scattering (DLS) technique and scanning electron microscopy (SEM) in order to establish the relationship between irradiation time and particle size. The protocols optimised during this study could be used as a blueprint for the development of nanoMIPs capable of the controlled release of potentially any compound of interest.

  8. A high-throughput fluorescence-based assay system for appetite-regulating gene and drug screening.

    Directory of Open Access Journals (Sweden)

    Yasuhito Shimada

    Full Text Available The increasing number of people suffering from metabolic syndrome and obesity is becoming a serious problem, not only in developed countries but also in developing countries. However, there are few agents currently approved for the treatment of obesity. Those that are available are mainly appetite suppressants and gastrointestinal fat blockers. We have developed a simple and rapid method for measuring the feeding volume of Danio rerio (zebrafish). This assay can be used to screen appetite suppressants and enhancers. In this study, zebrafish were fed viable paramecia that were fluorescently labeled, and feeding volume was measured using a 96-well microplate reader. Gene expression analysis of brain-derived neurotrophic factor (bdnf), knockdown of appetite-regulating genes (neuropeptide Y, preproinsulin, melanocortin 4 receptor, agouti-related protein, and cannabinoid receptor 1), and the administration of clinical appetite suppressants (fluoxetine, sibutramine, mazindol, phentermine, and rimonabant) revealed the similarity among mechanisms regulating appetite in zebrafish and mammals. In combination with behavioral analysis, we were able to evaluate adverse effects on locomotor activity from gene knockdown and chemical treatments. In conclusion, we have developed an assay that uses zebrafish and can be applied to high-throughput screening and target gene discovery for appetite suppressants and enhancers.

  9. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...

  10. New generation pharmacogenomic tools: a SNP linkage disequilibrium Map, validated SNP assay resource, and high-throughput instrumentation system for large-scale genetic studies.

    Science.gov (United States)

    De La Vega, Francisco M; Dailey, David; Ziegle, Janet; Williams, Julie; Madden, Dawn; Gilbert, Dennis A

    2002-06-01

    Since public and private efforts announced the first draft of the human genome last year, researchers have reported great numbers of single nucleotide polymorphisms (SNPs). We believe that the availability of well-mapped, quality SNP markers constitutes the gateway to a revolution in genetics and personalized medicine that will lead to better diagnosis and treatment of common complex disorders. A new generation of tools and public SNP resources for pharmacogenomic and genetic studies--specifically for candidate-gene, candidate-region, and whole-genome association studies--will form part of the new scientific landscape. This will only be possible through the greater accessibility of SNP resources and superior high-throughput instrumentation-assay systems that enable affordable, highly productive large-scale genetic studies. We are contributing to this effort by developing a high-quality linkage disequilibrium SNP marker map and an accompanying set of ready-to-use, validated SNP assays across every gene in the human genome. This effort incorporates both the public sequence and SNP data sources and Celera Genomics' human genome assembly and enormous resource of physically mapped SNPs (approximately 4,000,000 unique records). This article discusses our approach and methodology for designing the map, choosing quality SNPs, designing and validating these assays, and obtaining population frequencies of the polymorphisms. We also discuss an advanced, high-performance SNP assay chemistry--a new generation of the TaqMan probe-based, 5' nuclease assay--and a high-throughput instrumentation-software system for large-scale genotyping. We provide the new SNP map and validation information, validated SNP assays and reagents, and instrumentation systems as a novel resource for genetic discoveries.

  11. Engineered design features in the HI-STAR/HI-STORM systems to maximize ALARA, safety, and community acceptance

    International Nuclear Information System (INIS)

    Blessing, Christian

    2003-01-01

    Holtec International is a U.S. corporation headquartered in New Jersey, dedicated to providing capital goods and technical services to the power industry. Over 75 percent of the company's product output is destined for nuclear power plants. Holtec counts among its active clients a majority of the nuclear plants in the United States, as well as plants in Korea, Taiwan, Mexico, and Brazil. The company also has a growing market presence in Japan and the European Union. Leading U.S. nuclear plant owners, such as Entergy, Exelon, FPL, Southern Nuclear, PG&E and TVA, have a long-term and continuous business relationship with Holtec International. This article describes the Holtec dry storage system, the multi-purpose canister, the HI-STAR 100 overpack, the HI-STORM 100 overpack, and the unique advantages of Holtec's dry storage technology.

  12. Maximize Minimum Utility Function of Fractional Cloud Computing System Based on Search Algorithm Utilizing the Mittag-Leffler Sum

    Directory of Open Access Journals (Sweden)

    Rabha W. Ibrahim

    2018-01-01

    Full Text Available The maximum min utility function (MMUF) problem is an important representative of a large class of cloud computing systems (CCS), with numerous applications in practice, especially in economics and industry. This paper introduces an effective solution-based search (SBS) algorithm for solving the MMUF problem. First, we suggest a new formula for the utility function in terms of the capacity of the cloud. We formulate the capacity in CCS by using a fractional diffeo-integral equation, which typically describes the flow of CCS. The new formula of the utility function modifies recent active utility functions. The suggested technique first creates a high-quality initial solution by eliminating the less promising components, and then improves the quality of the achieved solution by the summation search solution (SSS). This method employs the Mittag-Leffler sum as a hash function to determine the position of the agent. Experimental results on instances commonly utilized in the literature demonstrate that the proposed algorithm competes favorably with state-of-the-art algorithms in terms of both solution quality and computational efficiency.

  13. Qgui: A high-throughput interface for automated setup and analysis of free energy calculations and empirical valence bond simulations in biological systems.

    Science.gov (United States)

    Isaksen, Geir Villy; Andberg, Tor Arne Heim; Åqvist, Johan; Brandsdal, Bjørn Olav

    2015-07-01

    Structural information and activity data has increased rapidly for many protein targets during the last decades. In this paper, we present a high-throughput interface (Qgui) for automated free energy and empirical valence bond (EVB) calculations that use molecular dynamics (MD) simulations for conformational sampling. Applications to ligand binding using both the linear interaction energy (LIE) method and the free energy perturbation (FEP) technique are given using the estrogen receptor (ERα) as a model system. Examples of free energy profiles obtained using the EVB method for the rate-limiting step of the enzymatic reaction catalyzed by trypsin are also shown. In addition, we present calculation of high-precision Arrhenius plots to obtain the thermodynamic activation enthalpy and entropy with Qgui from running a large number of EVB simulations. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Tri-maximal vs. bi-maximal neutrino mixing

    International Nuclear Information System (INIS)

    Scott, W.G

    2000-01-01

    It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis, which is not excluded, taking account of terrestrial matter effects

  15. High-throughput determination of vancomycin in human plasma by a cost-effective system of two-dimensional liquid chromatography.

    Science.gov (United States)

    Sheng, Yanghao; Zhou, Boting

    2017-05-26

    Therapeutic drug monitoring (TDM) is one of the most important services of clinical laboratories. Two main techniques are commonly used: immunoassay and chromatography. We have developed a cost-effective system of two-dimensional liquid chromatography with ultraviolet detection (2D-LC-UV) for high-throughput determination of vancomycin in human plasma. It combines the automation and low start-up costs of the immunoassay with the high selectivity and sensitivity of liquid chromatography coupled with mass spectrometric detection, without incurring their disadvantages, thereby achieving high cost-effectiveness. This 2D-LC system offers large-volume injection to provide sufficient sensitivity and uses simulated gradient peak compression technology to control peak broadening and improve peak shape. A middle column was added to reduce the analysis cycle time and make the system suitable for high-throughput routine clinical assays. The analysis cycle time was 4 min and the peak width was 0.8 min. Compared with other chromatographic methods that have been developed, the analysis cycle time and peak width for vancomycin were reduced significantly. The lower limit of quantification was 0.20 μg/mL for vancomycin, which is the same as certain LC-MS/MS methods that have been recently developed and validated. The method is rapid, automated, and low-cost and has high selectivity and sensitivity for the quantification of vancomycin in human plasma, thus making it well-suited for use in hospital clinical laboratories. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Management of High-Throughput DNA Sequencing Projects: Alpheus.

    Science.gov (United States)

    Miller, Neil A; Kingsmore, Stephen F; Farmer, Andrew; Langley, Raymond J; Mudge, Joann; Crow, John A; Gonzalez, Alvaro J; Schilkey, Faye D; Kim, Ryan J; van Velkinburgh, Jennifer; May, Gregory D; Black, C Forrest; Myers, M Kathy; Utsey, John P; Frost, Nicholas S; Sugarbaker, David J; Bueno, Raphael; Gullans, Stephen R; Baxter, Susan M; Day, Steve W; Retzel, Ernest F

    2008-12-26

    High-throughput DNA sequencing has enabled systems biology to begin to address areas in health, agricultural and basic biological research. Concomitant with the opportunities is an absolute necessity to manage significant volumes of high-dimensional and inter-related data and analysis. Alpheus is an analysis pipeline, database and visualization software for use with massively parallel DNA sequencing technologies that feature multi-gigabase throughput characterized by relatively short reads, such as Illumina-Solexa (sequencing-by-synthesis), Roche-454 (pyrosequencing) and Applied Biosystems' SOLiD (sequencing-by-ligation). Alpheus enables alignment to reference sequence(s), detection of variants and enumeration of sequence abundance, including expression levels in transcriptome sequence. Alpheus is able to detect several types of variants, including non-synonymous and synonymous single nucleotide polymorphisms (SNPs), insertions/deletions (indels), premature stop codons, and splice isoforms. Variant detection is aided by the ability to filter variant calls based on consistency, expected allele frequency, sequence quality, coverage, and variant type in order to minimize false positives while maximizing the identification of true positives. Alpheus also enables comparisons of genes with variants between cases and controls or bulk segregant pools. Sequence-based differential expression comparisons can be developed, with data export to SAS JMP Genomics for statistical analysis.
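    The multi-criteria variant filtering described above can be sketched as a simple threshold filter over per-call metrics. The field names and thresholds below are illustrative assumptions, not Alpheus's actual schema or defaults.

    ```python
    # Hypothetical variant-call filter in the spirit of the record above: keep a
    # call only if it clears coverage, quality, and allele-frequency thresholds,
    # trading off false positives against missed true variants.
    def passes_filters(call, min_coverage=10, min_quality=20.0, min_allele_freq=0.2):
        return (call["coverage"] >= min_coverage
                and call["quality"] >= min_quality
                and call["allele_freq"] >= min_allele_freq)

    calls = [
        {"pos": 101, "coverage": 35, "quality": 48.0, "allele_freq": 0.46},  # keep
        {"pos": 202, "coverage": 4,  "quality": 51.0, "allele_freq": 0.50},  # low coverage
        {"pos": 303, "coverage": 22, "quality": 33.0, "allele_freq": 0.05},  # likely noise
    ]
    kept = [c["pos"] for c in calls if passes_filters(c)]
    print(kept)  # [101]
    ```

    Raising the thresholds trims false positives at the cost of sensitivity, which is the trade-off the record describes for case-control and bulk segregant comparisons.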

  17. Student throughput variables and properties: Varying cohort sizes

    Directory of Open Access Journals (Sweden)

    Lucas C.A. Stoop

    2017-11-01

    Full Text Available A recent research paper described how student throughput variables and properties combine to explain the behaviour of stationary or simplified throughput systems. Such behaviour can be understood in terms of the locus of a point in the triangular admissible region of the H-S plane, where H represents headcounts and S successful credits, each depending on the system properties at that point. The efficiency of the student throughput process is given by the ratio S/H. Simplified throughput systems are characterised by stationary graduation and dropout patterns of students as well as by annual intakes of student cohorts of equal size. The effect of varying the size of the annual intakes of student cohorts is reported on here. The observations made lead to the establishment of a more generalised student throughput theory which includes the simplified theory as a special case. The generalised theory still retains the notion of a triangular admissible region in the H-S plane but with the size and shape of the triangle depending on the size of the student cohorts. The ratio S/H again emerges as the process efficiency measure for throughput systems in general with unchanged roles assigned to important system properties. This theory provides for a more fundamental understanding of student throughput systems encountered in real life. Significance: A generalised stationary student throughput theory through varying cohort sizes allows for a far better understanding of real student throughput systems.
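    The efficiency measure S/H described above can be illustrated numerically for the generalised case of varying cohort sizes. The cohort figures below are toy numbers for illustration, not data from the paper.

    ```python
    def throughput_efficiency(successful_credits, headcounts):
        """Process efficiency S/H: successful credits per enrolled headcount."""
        return successful_credits / headcounts

    # Toy system: three annual intake cohorts of unequal size (the generalised
    # case), each contributing headcounts H and successful credits S.
    cohorts = [
        {"H": 1000, "S": 720.0},
        {"H": 1200, "S": 900.0},
        {"H": 800,  "S": 560.0},
    ]
    H_total = sum(c["H"] for c in cohorts)
    S_total = sum(c["S"] for c in cohorts)
    print(round(throughput_efficiency(S_total, H_total), 3))  # 2180/3000 -> 0.727
    ```

    Each cohort's point (H, S) lies in the triangular admissible region of the H-S plane, and the aggregate ratio S/H summarises the whole system's throughput efficiency.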

  18. Bipartite Bell Inequality and Maximal Violation

    International Nuclear Information System (INIS)

    Li Ming; Fei Shaoming; Li-Jost Xian-Qing

    2011-01-01

    We present new Bell inequalities for arbitrary-dimensional bipartite quantum systems. The maximal violation of the inequalities is computed. These Bell inequalities are capable of detecting quantum entanglement of both pure and mixed quantum states more effectively. (general)

  19. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7 .

  20. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  1. MAXIM: The Blackhole Imager

    Science.gov (United States)

    Gendreau, Keith; Cash, Webster; Gorenstein, Paul; Windt, David; Kaaret, Phil; Reynolds, Chris

    2004-01-01

    The Beyond Einstein Program in NASA's Office of Space Science Structure and Evolution of the Universe theme spells out the top-level scientific requirements for a Black Hole Imager in its strategic plan. The MAXIM mission will provide imaging at better than one tenth of a microarcsecond in the X-ray band in order to satisfy these requirements. We will overview the driving requirements to achieve these goals and ultimately resolve the event horizon of a supermassive black hole. We will present the current status of this effort, which includes a study of a baseline design as well as two alternative approaches.

  2. Social group utility maximization

    CERN Document Server

    Gong, Xiaowen; Yang, Lei; Zhang, Junshan

    2014-01-01

    This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy. This brief also investigates the SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b

  3. Development of a quantitative assay amenable for high-throughput screening to target the type II secretion system for new treatments against plant-pathogenic bacteria.

    Science.gov (United States)

    Tran, Nini; Zielke, Ryszard A; Vining, Oliver B; Azevedo, Mark D; Armstrong, Donald J; Banowetz, Gary M; McPhail, Kerry L; Sikora, Aleksandra E

    2013-09-01

    Plant-pathogenic bacteria are the causative agents of diseases in important agricultural crops and ornamental plants. The severe economic burden of these diseases requires seeking new approaches for their control, particularly because phytopathogenic bacteria are often resistant to available treatments. The type II secretion (T2S) system is a key virulence factor used by major groups of phytopathogenic bacteria. The T2S machinery transports many hydrolytic enzymes responsible for degradation of the plant cell wall, thus enabling successful colonization and dissemination of the bacteria in the plant host. The genetic inactivation of the T2S system leads to loss of virulence, which strongly suggests that targeting the T2S could enable new treatments against plant-pathogenic bacteria. Accordingly, we have designed and optimized an assay to identify small-molecule inhibitors of the T2S system. This assay uses a double parametric output: measurement of bacterial growth and the enzymatic activity of cellulase, which is secreted via the T2S pathway in our model organism Dickeya dadantii. The assay was evaluated by screening natural extracts, culture filtrates isolated from rhizosphere bacteria, and a collection of pharmaceutically active compounds in LOPAC(1280). The calculated Z' values of 0.63, 0.63, and 0.58, respectively, strongly suggest that the assay is applicable for a high-throughput screening platform.
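
    The Z' values quoted above (0.63, 0.63, and 0.58) are instances of the standard Z'-factor assay-quality statistic, where values above roughly 0.5 are conventionally taken to indicate an assay robust enough for high-throughput screening. As a minimal sketch, the computation from positive- and negative-control wells looks like this; the control readings below are made up purely for illustration and are not data from the study.

```python
# Z'-factor for assay quality control (Zhang, Chung & Oldenburg, 1999):
#   Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|
# The cellulase-activity readings below are hypothetical control-well values,
# not figures from the Dickeya dadantii assay described above.
from statistics import mean, stdev

def z_prime(positive: list, negative: list) -> float:
    """Return the Z'-factor for the given positive/negative control readings."""
    return 1 - 3 * (stdev(positive) + stdev(negative)) / abs(mean(positive) - mean(negative))

pos = [98.0, 102.0, 100.0, 99.5, 100.5]  # secretion-proficient controls
neg = [10.0, 9.0, 11.0, 10.5, 9.5]       # T2S-deficient controls
print(round(z_prime(pos, neg), 3))
```

A well-separated assay with tight controls, as in this toy data, gives a Z' close to 1; noisier or overlapping controls push it toward (or below) 0.5.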

  4. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content.

    Directory of Open Access Journals (Sweden)

    Hongzhi Wang

    One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. A rapid haploid identification method is critical for large-scale selection of doubled haploids. The conventional methods, based on the color of the endosperm and embryo of seeds, are slow, manual, and prone to error. On the other hand, there is a significant difference in oil content between diploid and haploid seeds generated by a high-oil inducer, which makes it possible to use oil content to identify haploids. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system comprises a sampler unit that selects a single kernel and feeds it for NMR and weight measurement, and a kernel sorter that distributes the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test results are described and directions for future improvement are discussed.

  5. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content

    Science.gov (United States)

    Xu, Xiaoping; Huang, Qingming; Chen, Shanshan; Yang, Peiqiang; Chen, Shaojiang; Song, Yiqiao

    2016-01-01

    One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. A rapid haploid identification method is critical for large-scale selection of doubled haploids. The conventional methods, based on the color of the endosperm and embryo of seeds, are slow, manual, and prone to error. On the other hand, there is a significant difference in oil content between diploid and haploid seeds generated by a high-oil inducer, which makes it possible to use oil content to identify haploids. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system comprises a sampler unit that selects a single kernel and feeds it for NMR and weight measurement, and a kernel sorter that distributes the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test results are described and directions for future improvement are discussed. PMID:27454427
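
    The quoted figure of 4 seconds per kernel fixes the system's raw sorting rate, which is easy to sanity-check with a few lines of arithmetic; the 10,000-kernel batch size below is a hypothetical example, not a figure from the paper.

```python
# Back-of-the-envelope throughput for the kernel screening system described
# above: 4 s per kernel end to end. The batch size is a made-up example.
SECONDS_PER_KERNEL = 4
kernels_per_hour = 3600 // SECONDS_PER_KERNEL          # raw sorting rate
batch = 10_000                                         # hypothetical batch
hours_for_batch = batch * SECONDS_PER_KERNEL / 3600
print(kernels_per_hour, round(hours_for_batch, 1))     # 900 11.1
```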

  6. A new sieving matrix for DNA sequencing, genotyping and mutation detection and high-throughput genotyping with a 96-capillary array system

    Energy Technology Data Exchange (ETDEWEB)

    Gao, David [Iowa State Univ., Ames, IA (United States)

    1999-11-08

    Capillary electrophoresis has been widely accepted as a fast separation technique in DNA analysis. In this dissertation, a new sieving matrix is described for DNA analysis, especially DNA sequencing, genetic typing and mutation detection. A high-throughput 96 capillary array electrophoresis system was also demonstrated for simultaneous multiple genotyping. The authors first evaluated the influence of different capillary coatings on the performance of DNA sequencing. A bare capillary was compared with a DB-wax, an FC-coated and a polyvinylpyrrolidone dynamically coated capillary with PEO as sieving matrix. It was found that covalently-coated capillaries had no better performance than bare capillaries, while PVP coating provided excellent and reproducible results. The authors also developed a new sieving matrix for DNA separation based on commercially available poly(vinylpyrrolidone) (PVP). This sieving matrix has a very low viscosity and an excellent self-coating effect. Successful separations were achieved in uncoated capillaries. Sequencing of M13mp18 showed good resolution up to 500 bases in treated PVP solution. Temperature gradient capillary electrophoresis and PVP solution were applied to mutation detection. A heteroduplex sample and a homoduplex reference were injected during a pair of continuous runs. A temperature gradient of 10 °C with a ramp of 0.7 °C/min was swept throughout the capillary. Detection was accomplished by laser-induced fluorescence detection. Mutation detection was performed by comparing the pattern changes between the homoduplex and the heteroduplex samples. High throughput, a high detection rate and easy operation were achieved in this system. They further demonstrated fast and reliable genotyping based on the CTTv STR system by multiple-capillary array electrophoresis. The PCR products from individuals were mixed with pooled allelic ladder as an absolute standard and coinjected with a 96-vial tray. 
Simultaneous one-color laser-induced fluorescence

  7. Establishment and Application of a High Throughput Screening System Targeting the Interaction between HCV Internal Ribosome Entry Site and Human Eukaryotic Translation Initiation Factor 3

    Directory of Open Access Journals (Sweden)

    Yuying Zhu

    2017-05-01

    Viruses are obligate intracellular parasites, and the host cellular machinery is usually recruited for their replication. Human eukaryotic translation initiation factor 3 (eIF3) can be directly recruited by the hepatitis C virus (HCV) internal ribosome entry site (IRES) to promote the translation of viral proteins. In this study, we establish a fluorescence polarization (FP)-based high throughput screening (HTS) system targeting the interaction between the HCV IRES and eIF3. By screening a total of 894 compounds with this HTS system, two compounds (Mucl39526 and NP39) were found to disturb the interaction between the HCV IRES and eIF3, and these two compounds were further demonstrated to inhibit HCV IRES-dependent translation in vitro. Thus, this HTS system can be used to screen potential HCV replication inhibitors targeting human eIF3, which is helpful for overcoming the problem of viral resistance. Surprisingly, one compound, HP-3, a kind of oxytocin antagonist, was discovered with this HTS system to significantly enhance the interaction between the HCV IRES and eIF3. HP-3 was demonstrated to directly interact with the HCV IRES and promote HCV IRES-dependent translation both in vitro and in vivo, which strongly suggests that HP-3 has the potential to promote HCV replication. Therefore, this HTS system is also useful for screening potential HCV replication enhancers, which is meaningful for understanding viral replication and screening novel antiviral drugs. To our knowledge, this is the first HTS system targeting the interaction between eIF3 and the HCV IRES, and it can be applied to screen both potential HCV replication inhibitors and enhancers.

  8. High-throughput screening (HTS) and modeling of the retinoid ...

    Science.gov (United States)

    Presentation at the Retinoids Review 2nd workshop in Brussels, Belgium, on the application of high-throughput screening and modeling to the retinoid system.

  9. A case study for cloud based high throughput analysis of NGS data using the globus genomics system

    Directory of Open Access Journals (Sweden)

    Krithika Bhuvaneshwar

    2015-01-01

    Next generation sequencing (NGS) technologies produce massive amounts of data requiring a powerful computational infrastructure, high quality bioinformatics software, and skilled personnel to operate the tools. We present a case study of a practical solution to this data management and analysis challenge that simplifies terabyte-scale data handling and provides advanced tools for NGS data analysis. These capabilities are implemented using the “Globus Genomics” system, which is an enhanced Galaxy workflow system made available as a service that offers users the capability to process and transfer data easily, reliably, and quickly to address end-to-end NGS analysis requirements. The Globus Genomics system is built on Amazon's cloud computing infrastructure. The system takes advantage of elastic scaling of compute resources to run multiple workflows in parallel, and it also helps meet the scale-out analysis needs of modern translational genomics research.

  10. Maximal Abelian sets of roots

    CERN Document Server

    Lawther, R

    2018-01-01

    In this work the author lets \\Phi be an irreducible root system, with Coxeter group W. He considers subsets of \\Phi which are abelian, meaning that no two roots in the set have sum in \\Phi \\cup \\{ 0 \\}. He classifies all maximal abelian sets (i.e., abelian sets properly contained in no other) up to the action of W: for each W-orbit of maximal abelian sets we provide an explicit representative X, identify the (setwise) stabilizer W_X of X in W, and decompose X into W_X-orbits. Abelian sets of roots are closely related to abelian unipotent subgroups of simple algebraic groups, and thus to abelian p-subgroups of finite groups of Lie type over fields of characteristic p. Parts of the work presented here have been used to confirm the p-rank of E_8(p^n), and (somewhat unexpectedly) to obtain for the first time the 2-ranks of the Monster and Baby Monster sporadic groups, together with the double cover of the latter. Root systems of classical type are dealt with quickly here; the vast majority of the present work con...

  11. A Single-Batch Fermentation System to Simulate Human Colonic Microbiota for High-Throughput Evaluation of Prebiotics

    Science.gov (United States)

    Sasaki, Daisuke; Fukuda, Itsuko; Tanaka, Kosei; Yoshida, Ken-ichi; Kondo, Akihiko; Osawa, Ro

    2016-01-01

    We devised a single-batch fermentation system to simulate human colonic microbiota from fecal samples, enabling the complex mixture of microorganisms to achieve densities of up to 10^11 cells/mL in 24 h. 16S rRNA gene sequence analysis of bacteria grown in the system revealed that representatives of the major phyla, including Bacteroidetes, Firmicutes, and Actinobacteria, as well as overall species diversity, were consistent with those of the original feces. In the earlier stages of fermentation (up to 9 h), trace mixtures of acetate, lactate, and succinate were detectable; in the later stages (after 24 h), larger amounts of acetate accumulated along with some propionate and butyrate. These patterns were similar to those observed in the original feces. Thus, this system could serve as a simple model to simulate the diversity as well as the metabolism of human colonic microbiota. Supplementation of the system with several prebiotic oligosaccharides (including fructo-, galacto-, isomalto-, and xylo-oligosaccharides; lactulose; and lactosucrose) resulted in an increased population of the genus Bifidobacterium, concomitant with significant increases in acetate production. The results suggested that this fermentation system may be useful for in vitro, pre-clinical evaluation of the effects of prebiotics prior to testing in humans. PMID:27483470

  12. Systems biology definition of the core proteome of metabolism and expression is consistent with high-throughput data

    DEFF Research Database (Denmark)

    Yang, Laurence; Tan, Justin; O'Brien, Edward J.

    2015-01-01

    Finding the minimal set of gene functions needed to sustain life is of both fundamental and practical importance. Minimal gene lists have been proposed by using comparative genomics-based core proteome definitions. A definition of a core proteome that is supported by empirical data, is understood at the systems level, and provides a basis for computing essential cell functions has been lacking. Here, we use a systems biology-based genome-scale model of metabolism and expression to define a functional core proteome consisting of 356 gene products, accounting for 44% of the Escherichia coli proteome by mass based on proteomics data. This systems biology core proteome includes 212 genes not found in previous comparative genomics-based core proteome definitions, accounts for 65% of known essential genes in E. coli, and has 78% gene function overlap with minimal genomes (Buchnera aphidicola and Mycoplasma ...)

  13. Maximizing post-stroke upper limb rehabilitation using a novel telerehabilitation interactive virtual reality system in the patient's home: study protocol of a randomized clinical trial.

    Science.gov (United States)

    Kairy, Dahlia; Veras, Mirella; Archambault, Philippe; Hernandez, Alejandro; Higgins, Johanne; Levin, Mindy F; Poissant, Lise; Raz, Amir; Kaizer, Franceen

    2016-03-01

    Telerehabilitation (TR), or the provision of rehabilitation services from a distance using telecommunication tools such as the Internet, can contribute to ensure that patients receive the best care at the right time. This study aims to assess the effect of an interactive virtual reality (VR) system that allows ongoing rehabilitation of the upper extremity (UE) following a stroke, while the person is in their own home, with offline monitoring and feedback from a therapist at a distance. A single-blind (evaluator is blind to group assignment) two-arm randomized controlled trial is proposed, with participants who have had a stroke and are no longer receiving rehabilitation services randomly allocated to: (1) 4-week written home exercise program, i.e. usual care discharge home program or (2) a 4-week home-based TR exercise program using VR in addition to usual care i.e. treatment group. Motor recovery of the UE will be assessed using the Fugl-Meyer Assessment-UE and the Box and Block tests. To determine the efficacy of the system in terms of functional recovery, the Motor Activity Log, a self-reported measure of UE use will be used. Impact on quality of life will be determined using the Stroke Impact Scale-16. Lastly, a preliminary cost-effectiveness analysis will be conducted using costs and outcomes for all groups. Findings will contribute to evidence regarding the use of TR and VR to provide stroke rehabilitation services from a distance. This approach can enhance continuity of care once patients are discharged from rehabilitation, in order to maximize their recovery beyond the current available services. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. A high throughput screening system for determining the three actions of insecticides against Aedes aegypti (Diptera: Culicidae) populations in Thailand

    Science.gov (United States)

    Chemicals can protect humans from the bites of hematophagous arthropods through three different primary actions: irritancy (excitation), repellency, or toxicity; actions that can be evaluated using a laboratory-based assay system. In this study, the deterrent and toxic actions of three synthetic py...

  15. High-throughput determination of octanol/water partition coefficients using a shake-flask method and novel two-phase solvent system.

    Science.gov (United States)

    Morikawa, Go; Suzuka, Chihiro; Shoji, Atsushi; Shibusawa, Yoichi; Yanagida, Akio

    2016-01-05

    A high-throughput method for determining the octanol/water partition coefficient (P(o/w)) of a large variety of compounds exhibiting a wide range in hydrophobicity was established. The method combines a simple shake-flask method with a novel two-phase solvent system comprising an acetonitrile-phosphate buffer (0.1 M, pH 7.4)-1-octanol (25:25:4, v/v/v; AN system). The AN system partition coefficients (K(AN)) of 51 standard compounds for which log P(o/w) (at pH 7.4; log D) values had been reported were determined by single two-phase partitioning in test tubes, followed by measurement of the solute concentration in both phases using an automatic flow injection-ultraviolet detection system. The log K(AN) values were closely related to reported log D values, and the relationship could be expressed by the following linear regression equation: log D = 2.8630 log K(AN) - 0.1497 (n = 51). The relationship reveals that log D values (+8 to -8) for a large variety of highly hydrophobic and/or hydrophilic compounds can be estimated indirectly from the narrow range of log K(AN) values (+3 to -3) determined using the present method. Furthermore, log K(AN) values for highly polar compounds for which no log D values have been reported, such as amino acids, peptides, proteins, nucleosides, and nucleotides, can be estimated using the present method. The wide-ranging log D values (+5.9 to -7.5) of these molecules were estimated for the first time from their log K(AN) values and the above regression equation. Copyright © 2015 Elsevier B.V. All rights reserved.
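
    The indirect estimation described above is a one-line linear map from the measured log K(AN) to log D. A minimal sketch, using the regression coefficients reported in the abstract (the function name is ours, introduced only for illustration):

```python
# Estimate log D (octanol/water distribution coefficient at pH 7.4) from the
# AN-system partition coefficient, using the reported regression
#   log D = 2.8630 * log K(AN) - 0.1497   (n = 51)
def log_d_from_log_kan(log_kan: float) -> float:
    return 2.8630 * log_kan - 0.1497

# A solute partitioning equally between the two AN-system phases
# (K(AN) = 1, so log K(AN) = 0) is estimated at log D ~ -0.15:
print(round(log_d_from_log_kan(0.0), 2))  # -0.15
```

This also illustrates the range compression the authors exploit: the measurable log K(AN) window of +3 to -3 maps onto log D values of roughly +8.4 to -8.7.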

  16. High-throughput typing method to identify a non-outbreak-involved Legionella pneumophila strain colonizing the entire water supply system in the town of Rennes, France.

    Science.gov (United States)

    Sobral, D; Le Cann, P; Gerard, A; Jarraud, S; Lebeau, B; Loisy-Hamon, F; Vergnaud, G; Pourcel, C

    2011-10-01

    Two legionellosis outbreaks occurred in the city of Rennes, France, during the past decade, requiring in-depth monitoring of Legionella pneumophila in the water network and the cooling towers in the city. In order to characterize the resulting large collection of isolates, an automated low-cost typing method was developed. The multiplex capillary-based variable-number tandem repeat (VNTR) (multiple-locus VNTR analysis [MLVA]) assay requiring only one PCR amplification per isolate ensures a high level of discrimination and reduces hands-on and time requirements. In less than 2 days and using one 4-capillary apparatus, 217 environmental isolates collected between 2000 and 2009 and 5 clinical isolates obtained during outbreaks in 2000 and 2006 in Rennes were analyzed, and 15 different genotypes were identified. A large cluster of isolates with closely related genotypes and representing 77% of the population was composed exclusively of environmental isolates extracted from hot water supply systems. It was not responsible for the known Rennes epidemic cases, although strains showing a similar MLVA profile have regularly been involved in European outbreaks. The clinical isolates in Rennes had the same genotype as isolates contaminating a mall's cooling tower. This study further demonstrates that unknown environmental or genetic factors contribute to the pathogenicity of some strains. This work illustrates the potential of the high-throughput MLVA typing method to investigate the origin of legionellosis cases by allowing the systematic typing of any new isolate and inclusion of data in shared databases.

  17. Designing and Validating Ternary Pd Alloys for Optimum Sulfur/Carbon Resistance in Hydrogen Separation and Carbon Capture Membrane Systems Using High-Throughput Combinatorial Methods

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Amanda [Pall Corporation, Port Washington, NY (United States); Zhao, Hongbin [Pall Corporation, Port Washington, NY (United States); Hopkins, Scott [Pall Corporation, Port Washington, NY (United States)

    2014-12-01

    This report summarizes the work completed under U.S. Department of Energy Project Award No. DE-FE0001181, titled “Designing and Validating Ternary Pd Alloys for Optimum Sulfur/Carbon Resistance in Hydrogen Separation and Carbon Capture Membrane Systems Using High-Throughput Combinatorial Methods.” The project started on October 1, 2009 and finished on September 30, 2014. Pall Corporation worked with Cornell University to sputter palladium-based ternary alloys onto silicon wafers so that many alloys could be examined at once. Using the specialized equipment at the Georgia Institute of Technology to analyze the wafers for adsorbed carbon and sulfur species, six compositions were identified as resistant to carbon and sulfur species. These compositions were deposited on Pall AccuSep® supports by the Colorado School of Mines and then tested in simulated synthetic coal gas at Pall Corporation. Two of the six alloys were chosen for further investigation based on their performance. Alloy reproducibility and long-term testing of PdAuAg and PdZrAu provided insight into the ability to manufacture these compositions for testing. PdAuAg is the most promising alloy found in this work, based on its fabrication reproducibility and resistance to carbon and sulfur. Although PdZrAu had great initial resistance to carbon and sulfur species, its workable composition range is very narrow, which hindered testing reproducibility.

  18. A systemic gene silencing method suitable for high throughput, reverse genetic analyses of gene function in fern gametophytes

    Directory of Open Access Journals (Sweden)

    Tanurdzic Milos

    2004-04-01

    Background: Ceratopteris richardii is a useful experimental system for studying gametophyte development and sexual reproduction in plants. However, few tools for cloning mutant genes or disrupting gene function exist for this species. The feasibility of systemic gene silencing as a reverse genetics tool was examined in this study. Results: Several DNA constructs targeting a Ceratopteris protoporphyrin IX magnesium chelatase (CrChlI) gene, which is required for chlorophyll biosynthesis, were each introduced into young gametophytes by biolistic delivery. Their transient expression in individual cells resulted in a colorless cell phenotype that affected most cells of the mature gametophyte, including the meristem and gametangia. The colorless phenotype was associated with a 7-fold decrease in the abundance of the endogenous transcript. While a construct designed to promote the transient expression of a CrChlI double-stranded, potentially hairpin-forming RNA was found to be the most efficient in systemically silencing the endogenous gene, a plasmid containing the CrChlI cDNA insert alone was sufficient to induce silencing. Bombarded, colorless hermaphroditic gametophytes produced colorless embryos following self-fertilization, demonstrating that the silencing signal could be transmitted through gametogenesis and fertilization. Bombardment of young gametophytes with constructs targeting the Ceratopteris filamentous temperature sensitive (CrFtsZ) and uroporphyrin dehydrogenase (CrUrod) genes also produced the expected mutant phenotypes. Conclusion: A method that induces the systemic silencing of target genes in the Ceratopteris gametophyte is described. It provides a simple, inexpensive, and rapid means to test the functions of genes involved in gametophyte development, especially those involved in cellular processes common to all plants.

  19. Assessing the impact of water treatment on bacterial biofilms in drinking water distribution systems using high-throughput DNA sequencing.

    Science.gov (United States)

    Shaw, Jennifer L A; Monis, Paul; Fabris, Rolando; Ho, Lionel; Braun, Kalan; Drikas, Mary; Cooper, Alan

    2014-12-01

    Biofilm control in drinking water distribution systems (DWDSs) is crucial, as biofilms are known to reduce flow efficiency, impair taste and quality of drinking water and have been implicated in the transmission of harmful pathogens. Microorganisms within biofilm communities are more resistant to disinfection compared to planktonic microorganisms, making them difficult to manage in DWDSs. This study evaluates the impact of four unique drinking water treatments on biofilm community structure using metagenomic DNA sequencing. Four experimental DWDSs were subjected to the following treatments: (1) conventional coagulation, (2) magnetic ion exchange contact (MIEX) plus conventional coagulation, (3) MIEX plus conventional coagulation plus granular activated carbon, and (4) membrane filtration (MF). Bacterial biofilms located inside the pipes of each system were sampled under sterile conditions both (a) immediately after treatment application ('inlet') and (b) at a 1 km distance from the treatment application ('outlet'). Bacterial 16S rRNA gene sequencing revealed that the outlet biofilms were more diverse than those sampled at the inlet for all treatments. The lowest number of unique operational taxonomic units (OTUs) and lowest diversity was observed in the MF inlet. However, the MF system revealed the greatest increase in diversity and OTU count from inlet to outlet. Further, the biofilm communities at the outlet of each system were more similar to one another than to their respective inlet, suggesting that biofilm communities converge towards a common established equilibrium as distance from treatment application increases. Based on the results, MF treatment is most effective at inhibiting biofilm growth, but a highly efficient post-treatment disinfection regime is also critical in order to prevent the high rates of post-treatment regrowth. Copyright © 2014 Elsevier Ltd. All rights reserved.
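
    The inlet/outlet comparison above rests on OTU counts and a community diversity measure. The abstract does not name the index used, so purely as an illustration the sketch below computes the common Shannon index H' = -Σ p_i ln p_i from hypothetical OTU abundance tables; a more even, OTU-rich community (like the outlets described) scores higher.

```python
# Shannon diversity index H' = -sum(p_i * ln(p_i)) over OTU relative
# abundances. The abundance tables are hypothetical, chosen only to mimic the
# inlet (few, skewed OTUs) vs. outlet (more, evener OTUs) pattern reported.
from math import log

def shannon(otu_counts: list) -> float:
    """Return Shannon diversity H' for a list of OTU read counts."""
    total = sum(otu_counts)
    return -sum((n / total) * log(n / total) for n in otu_counts if n > 0)

inlet = [900, 50, 30, 20]           # few OTUs, dominated by one taxon
outlet = [300, 250, 200, 150, 100]  # more OTUs, more even abundances
assert shannon(outlet) > shannon(inlet)  # outlet community is more diverse
print(round(shannon(inlet), 2), round(shannon(outlet), 2))
```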

  20. Helios: History and Anatomy of a Successful In-House Enterprise High-Throughput Screening and Profiling Data Analysis System.

    Science.gov (United States)

    Gubler, Hanspeter; Clare, Nicholas; Galafassi, Laurent; Geissler, Uwe; Girod, Michel; Herr, Guy

    2018-06-01

    We describe the main characteristics of the Novartis Helios data analysis software system (Novartis, Basel, Switzerland) for plate-based screening and profiling assays, which was designed and built about 11 years ago. It has been in productive use for more than 10 years and is one of the important standard software applications running for a large user community at all Novartis Institutes for BioMedical Research sites globally. A high degree of automation is reached by embedding the data analysis capabilities into a software ecosystem that deals with the management of samples, plates, and result data files, including automated data loading. The application provides a series of analytical procedures, ranging from very simple to advanced, which can easily be assembled by users in very flexible ways. This also includes the automatic derivation of a large set of quality control (QC) characteristics at every step. Any of the raw, intermediate, and final results and QC-relevant quantities can be easily explored through linked visualizations. Links to global assay metadata management, data warehouses, and an electronic lab notebook system are in place. Automated transfer of relevant data to data warehouses and electronic lab notebook systems is also implemented.

  1. Practical Maintenance of Digital Systems: Guidance to Maximize the Benefits of Digital Technology for the Maintenance of Digital Systems and Plant Equipment

    Energy Technology Data Exchange (ETDEWEB)

    Hill, D; Scarola, K

    2004-10-30

    This report presents detailed guidance for the maintenance and testing of modern digital systems. The guidance provides practical means for plants to take advantage of the increased diagnostic and self-test capabilities of these systems. It helps plants avoid mistakes in design and installation that could lead to increased maintenance burden and decreased system reliability and availability.

  2. Practical Maintenance of Digital Systems. Guidance to Maximize the Benefits of Digital Technology for the Maintenance of Digital Systems and Plant Equipment

    International Nuclear Information System (INIS)

    Hill, D.; Scarola, K.

    2004-01-01

    This report presents detailed guidance for the maintenance and testing of modern digital systems. The guidance provides practical means for plants to take advantage of the increased diagnostic and self-test capabilities of these systems. It helps plants avoid mistakes in design and installation that could lead to increased maintenance burden and decreased system reliability and availability

  3. Extraction and Purification of Quercitrin, Hyperoside, Rutin, and Afzelin from Zanthoxylum Bungeanum Maxim Leaves Using an Aqueous Two-Phase System.

    Science.gov (United States)

    He, Fengyuan; Li, Dengwu; Wang, Dongmei; Deng, Ming

    2016-07-01

    In this study, an aqueous two-phase system (ATPS) based on ethanol/NaH2PO4 was developed for the extraction and purification of quercitrin, hyperoside, rutin, and afzelin from Zanthoxylum bungeanum Maxim leaves. These 4 flavonoids were first extracted from dried Z. bungeanum leaves using a 60% ethanol solution and subsequently added to the ATPS for further purification. The partition behavior of the 4 flavonoids in the ATPS was investigated. The optimal ATPS conditions were: 29% (w/w) NaH2PO4, 25% (w/w) ethanol concentration, 1% (w/w) added amount of leaf extracts, no pH adjustment, and repeated 1 h extractions at 25 °C. Under the optimal conditions for the 10 g ATPS, the absolute recovery of quercitrin, hyperoside, rutin, and afzelin reached 90.3%, 83.5%, 92.3%, and 89.1%, respectively. Compared to the 60% ethanol extracts, the contents of quercitrin (44.8 mg/g), hyperoside (65.6 mg/g), rutin (56.4 mg/g), and afzelin (6.84 mg/g) in the extracts increased by 49.9%, 38.8%, 45.6%, and 36.8%, respectively. The extracts after ATPS also exhibited stronger antioxidant activities: the 2,2-diphenyl-1-picrylhydrazyl (DPPH) IC50 value (10.5 μg/mL) decreased by 41.8%, and the 2,2'-azinobis(3-ethylbenzothiazoline-6-sulfonic acid) diammonium salt (ABTS) value (966 μmol Trolox/g) and ferric reducing power value (619 μmol Trolox/g) increased by 29.8% and 53.7%, respectively. Furthermore, scale-up experiments indicated that larger-scale purification of the 4 flavonoids is feasible. © 2016 Institute of Food Technologists®

  4. Rhizoslides: paper-based growth system for non-destructive, high throughput phenotyping of root development by means of image analysis.

    Science.gov (United States)

    Le Marié, Chantal; Kirchgessner, Norbert; Marschall, Daniela; Walter, Achim; Hund, Andreas

    2014-01-01

    and precise evaluation of root lengths in diameter classes, but had weaknesses with respect to image segmentation and analysis of root system architecture. A new technique has been established for non-destructive root growth studies and quantification of architectural traits beyond seedling stages. However, automation of the scanning process and appropriate software remain the bottleneck for high throughput analysis.

  5. Maximal Bell's inequality violation for non-maximal entanglement

    International Nuclear Information System (INIS)

    Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.

    2004-01-01

    Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation is shown to attain its maximal limit for all values of the squeezing parameter, ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative and the average value of either polarization is nil.

  6. High throughput analysis reveals dissociable gene expression profiles in two independent neural systems involved in the regulation of social behavior

    Directory of Open Access Journals (Sweden)

    Stevenson Tyler J

    2012-10-01

    Abstract Background Production of contextually appropriate social behaviors involves integrated activity across many brain regions. Many songbird species produce complex vocalizations called 'songs' that serve to attract potential mates, defend territories, and/or maintain flock cohesion. There is a series of discrete, interconnected brain regions that are essential for the successful production of song. The probability and intensity of singing behavior are influenced by the reproductive state. The objective of this study was to compare the broad changes in gene expression in brain regions that control song production with those in a brain region that governs the reproductive state. Results We show using microarray cDNA analysis that two discrete brain systems that are both involved in governing singing behavior show markedly different gene expression profiles. We found that cortical and basal ganglia-like brain regions that control the socio-motor production of song in birds exhibit a categorical switch in gene expression that was dependent on their reproductive state. This pattern is in stark contrast to the pattern of expression observed in a hypothalamic brain region that governs the neuroendocrine control of reproduction. Subsequent gene ontology analysis revealed marked variation in the functional categories of active genes dependent on reproductive state and anatomical localization. HVC, one cortical-like structure, displayed significant gene expression changes associated with microtubule and neurofilament cytoskeleton organization, MAP kinase activity, and steroid hormone receptor complex activity. The transitions observed in the preoptic area, a nucleus that governs the motivation to engage in singing, exhibited variation in functional categories that included thyroid hormone receptor activity, epigenetic and angiogenetic processes. Conclusions These findings highlight the importance of considering the temporal patterns of gene expression

  7. Fuel cell-based CHP system modelling using Artificial Neural Networks aimed at developing techno-economic efficiency maximization control systems

    International Nuclear Information System (INIS)

    Asensio, F.J.; San Martín, J.I.; Zamora, I.; Garcia-Villalobos, J.

    2017-01-01

    This paper focuses on modelling the performance of a Polymer Electrolyte Membrane Fuel Cell (PEMFC)-based cogeneration system, in order to integrate it in hybrid and/or grid-connected systems and enable optimization of the techno-economic efficiency of the system in which it is integrated. To this end, experimental tests on a PEMFC-based cogeneration system of 600 W of electrical power were performed to train an Artificial Neural Network (ANN). Once trained, the ANN was able to emulate real operating conditions, such as the cooling water outlet temperature and the hydrogen consumption of the PEMFC, as functions of several variables, such as the electric power demanded, the temperature of the water flowing into the cooling circuit, the cooling water flow, and the heat demanded from the CHP system. After analysing the results, it is concluded that the presented model reproduces the performance of the tested PEMFC with sufficient accuracy and precision, thus enabling the use of the model and the ANN learning methodology to model other PEMFC-based cogeneration systems and integrate them into techno-economic efficiency optimization control systems. - Highlights: • The effect of the energy demand variation on the PEMFC's efficiency is predicted. • The model relies on experimental data obtained from a 600 W PEMFC. • It provides the temperature and the hydrogen consumption with good accuracy. • The range in which the global energy efficiency could be improved is provided.

  8. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  9. High throughput imaging cytometer with acoustic focussing.

    Science.gov (United States)

    Zmijan, Robert; Jonnalagadda, Umesh S; Carugo, Dario; Kochi, Yu; Lemm, Elizabeth; Packham, Graham; Hill, Martyn; Glynne-Jones, Peter

    2015-10-31

    We demonstrate an imaging flow cytometer that uses acoustic levitation to assemble cells and other particles into a sheet structure. This technique enables a high resolution, low noise CMOS camera to capture images of thousands of cells with each frame. While ultrasonic focussing has previously been demonstrated for 1D cytometry systems, extending the technology to a planar, much higher throughput format and integrating imaging is non-trivial, and represents a significant jump forward in capability, leading to diagnostic possibilities not achievable with current systems. A galvo mirror is used to track the images of the moving cells permitting exposure times of 10 ms at frame rates of 50 fps with motion blur of only a few pixels. At 80 fps, we demonstrate a throughput of 208 000 beads per second. We investigate the factors affecting motion blur and throughput, and demonstrate the system with fluorescent beads, leukaemia cells and a chondrocyte cell line. Cells require more time to reach the acoustic focus than beads, resulting in lower throughputs; however a longer device would remove this constraint.
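
As a quick sanity check on the reported figures (a back-of-the-envelope sketch we added; the variable names are ours, not the authors'), the per-frame particle count implied by the stated throughput and frame rate follows directly:

```python
# Back-of-the-envelope check of the reported imaging-cytometer figures:
# 208 000 beads per second captured at 80 frames per second implies
# the number of beads imaged in each camera frame.
beads_per_second = 208_000
frames_per_second = 80

beads_per_frame = beads_per_second / frames_per_second
print(beads_per_frame)  # 2600.0 beads per frame
```

This is consistent with the claim that thousands of cells are captured with each frame.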

  10. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...

  11. High throughput semiconductor deposition system

    Science.gov (United States)

    Young, David L.; Ptak, Aaron Joseph; Kuech, Thomas F.; Schulte, Kevin; Simon, John D.

    2017-11-21

    A reactor for growing or depositing semiconductor films or devices. The reactor may be designed for inline production of III-V materials grown by hydride vapor phase epitaxy (HVPE). The operating principles of the HVPE reactor can be used to provide a completely or partially inline reactor for many different materials. An exemplary design of the reactor is shown in the attached drawings. In some instances, all or many of the pieces of the reactor are formed of quartz, such as welded quartz tubing; other reactors are made from metal with appropriate corrosion-resistant coatings, such as quartz or other corrosion-resistant materials, or from stainless steel tubing or pipes lined with a corrosion-resistant material compatible with HVPE-type reactants and gases. Using HVPE in the reactor allows the use of lower-cost precursors at higher deposition rates, such as in the range of 1 to 5 µm/minute.

  12. Maximally causal quantum mechanics

    International Nuclear Information System (INIS)

    Roy, S.M.

    1998-01-01

    We present a new causal quantum mechanics in one and two dimensions developed recently at TIFR by this author and V. Singh. In this theory both position and momentum for a system point have Hamiltonian evolution in such a way that the ensemble of system points leads to position and momentum probability densities agreeing exactly with ordinary quantum mechanics. (author)

  13. Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-01-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.

  14. 3-Dimensional culture systems for anti-cancer compound profiling and high-throughput screening reveal increases in EGFR inhibitor-mediated cytotoxicity compared to monolayer culture systems.

    Science.gov (United States)

    Howes, Amy L; Richardson, Robyn D; Finlay, Darren; Vuori, Kristiina

    2014-01-01

    3-dimensional (3D) culture models have the potential to bridge the gap between monolayer cell culture and in vivo studies. To benefit anti-cancer drug discovery from 3D models, new techniques are needed that enable their use in high-throughput (HT) screening amenable formats. We have established miniaturized 3D culture methods robust enough for automated HT screens. We have applied these methods to evaluate the sensitivity of normal and tumorigenic breast epithelial cell lines against a panel of oncology drugs when cultured as monolayers (2D) and spheroids (3D). We have identified two classes of compounds that exhibit preferential cytotoxicity against cancer cells over normal cells when cultured as 3D spheroids: microtubule-targeting agents and epidermal growth factor receptor (EGFR) inhibitors. Further improving upon our 3D model, superior differentiation of EC50 values in the proof-of-concept screens was obtained by co-culturing the breast cancer cells with normal human fibroblasts and endothelial cells. Further, the selective sensitivity of the cancer cells towards chemotherapeutics was observed in 3D co-culture conditions, rather than as 2D co-culture monolayers, highlighting the importance of 3D cultures. Finally, we examined the putative mechanisms that drive the differing potency displayed by EGFR inhibitors. In summary, our studies establish robust 3D culture models of human cells for HT assessment of tumor cell-selective agents. This methodology is anticipated to provide a useful tool for the study of biological differences within 2D and 3D culture conditions in HT format, and an important platform for novel anti-cancer drug discovery.

  15. Maximizing ROI (return on information)

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, B.

    2000-05-01

    The role and importance of managing information are discussed, underscored by a quotation from the report of the International Data Corporation, according to which Fortune 500 companies lost $12 billion in 1999 due to inefficiencies resulting from intellectual re-work, substandard performance, and inability to find knowledge resources. The report predicts that this figure will rise to $31.5 billion by 2003. Key impediments to implementing knowledge management systems are identified as: the cost and human resources requirement of deployment; inflexibility of historical systems to adapt to change; and the difficulty of achieving corporate acceptance of inflexible software products that require changes in 'normal' ways of doing business. The author recommends the use of model-, document- and rule-independent systems with a document centered interface (DCI), employing rapid application development (RAD) and object technologies and visual model development, which eliminate these problems, making it possible for companies to maximize their return on information (ROI) and achieve substantial savings in implementation costs.

  16. Maximal frustration as an immunological principle.

    Science.gov (United States)

    de Abreu, F Vistulo; Mostardinha, P

    2009-03-06

    A fundamental problem in immunology is that of understanding how the immune system selects promptly which cells to kill without harming the body. This problem poses an apparent paradox. Strong reactivity against pathogens seems incompatible with perfect tolerance towards self. We propose a different view on cellular reactivity to overcome this paradox: effector functions should be seen as the outcome of cellular decisions which can be in conflict with other cells' decisions. We argue that if cellular systems are frustrated, then extensive cross-reactivity among the elements in the system can decrease the reactivity of the system as a whole and induce perfect tolerance. Using numerical and mathematical analyses, we discuss two simple models that perform optimal pathogenic detection with no autoimmunity if cells are maximally frustrated. This study strongly suggests that a principle of maximal frustration could be used to build artificial immune systems. It would be interesting to test this principle in the real adaptive immune system.

  17. A THEORY OF MAXIMIZING SENSORY INFORMATION

    NARCIS (Netherlands)

    Hateren, J.H. van

    1992-01-01

    A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power

  18. Logit Analysis for Profit Maximizing Loan Classification

    OpenAIRE

    Watt, David L.; Mortensen, Timothy L.; Leistritz, F. Larry

    1988-01-01

    Lending criteria and loan classification methods are developed. Rating system breaking points are analyzed to present a method to maximize loan revenues. Financial characteristics of farmers are used as determinants of delinquency in a multivariate logistic model. Results indicate that the debt-to-asset and operating ratios are the most indicative of default.

  19. On Throughput Improvement of Wireless Ad Hoc Networks with Hidden Nodes

    Science.gov (United States)

    Choi, Hong-Seok; Lim, Jong-Tae

    In this letter, we present a throughput analysis of wireless ad hoc networks based on the IEEE 802.11 MAC (Medium Access Control). In particular, our analysis includes the case with the hidden node problem, so that it can be applied to multi-hop networks. In addition, we suggest a new channel access control algorithm to maximize the network throughput and show the usefulness of the proposed algorithm through simulations.
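
The letter's algorithm is not reproduced in the abstract; as a simplified illustration of why channel access must be tuned to maximize throughput (a sketch we added, using the classic slotted random-access model rather than the authors' 802.11 analysis), consider N stations each transmitting in a slot with probability p:

```python
# Simplified slotted random-access model: a slot carries a successful
# transmission iff exactly one of the n stations transmits, so
#     S(p) = n * p * (1 - p)**(n - 1).
# Scanning p shows throughput peaks near p = 1/n, which is the basic
# intuition behind access-control algorithms that adapt to network size.
def throughput(p, n):
    return n * p * (1 - p) ** (n - 1)

def best_p(n, steps=10_000):
    grid = [i / steps for i in range(1, steps)]
    return max(grid, key=lambda p: throughput(p, n))

n = 20
p_star = best_p(n)
print(round(p_star, 3))  # close to 1/n = 0.05
```

Hidden nodes add further collisions that this toy model ignores, which is exactly why the multi-hop analysis in the letter is needed.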

  20. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience.

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R; Bock, Davi D; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R Clay; Smith, Stephen J; Szalay, Alexander S; Vogelstein, Joshua T; Vogelstein, R Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization.
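
Partitioning a spatial index, as described above, can be illustrated with a Morton (Z-order) key: interleaving the bits of 3-d coordinates gives a linear key that keeps nearby voxels close together, so contiguous key ranges map to spatially compact regions. This sketch and its node-assignment rule are our own simplification, not the project's actual code:

```python
# Toy Morton (Z-order) spatial index. Interleave the bits of (x, y, z),
# then assign a data cuboid to a cluster node by key range: nearby
# cuboids receive nearby keys, so each node serves a compact region.
def morton3(x: int, y: int, z: int, bits: int = 10) -> int:
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

def node_for(x: int, y: int, z: int, nodes: int = 4, bits: int = 10) -> int:
    # Range-partition the key space [0, 2**(3*bits)) into `nodes` slices.
    return morton3(x, y, z, bits) * nodes >> (3 * bits)

print(morton3(1, 0, 0), morton3(0, 1, 0), morton3(0, 0, 1))  # 1 2 4
```

A real deployment would also need the inverse mapping and load balancing, but the locality property is what makes range partitioning effective for spatial reads.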

  1. Maximizing and customer loyalty: Are maximizers less loyal?

    Directory of Open Access Journals (Sweden)

    Linda Lai

    2011-06-01

    Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.

  2. Implications of maximal Jarlskog invariant and maximal CP violation

    International Nuclear Information System (INIS)

    Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico

    2001-04-01

    We argue here why the CP violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give maximal Jarlskog invariant J and maximal CP violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP violating phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP violating phase Φ is maximal in order to let nature treat democratically all of the quark mixing matrix moduli. (orig.)

  3. Phenomenology of maximal and near-maximal lepton mixing

    International Nuclear Information System (INIS)

    Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

    The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 - 2 sin^2(θ_ex) and quantify the present experimental status for |ε|. The most significant constraint on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10^-8 eV^2 ≲ Δm^2 ≲ 2x10^-7 eV^2. In the mass ranges Δm^2 ≳ 1.5x10^-5 eV^2 and 4x10^-10 eV^2 ≲ Δm^2 ≲ 2x10^-7 eV^2, the full interval of |ε| is allowed. We also discuss the implications of near-maximal ν_e mixing for atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay.

  4. High throughput salt separation from uranium deposits

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, S.W.; Park, K.M.; Kim, J.G.; Kim, I.T.; Park, S.B., E-mail: swkwon@kaeri.re.kr [Korea Atomic Energy Research Inst. (Korea, Republic of)

    2014-07-01

    It is very important to increase the throughput of the salt separation system owing to the high uranium content of spent nuclear fuel and the high salt fraction of uranium dendrites in pyroprocessing. A multilayer porous crucible system was proposed in this study to increase the throughput of the salt distiller. An integrated sieve-crucible assembly was also investigated for the practical use of the porous crucible system. The salt evaporation behaviors were compared between a conventional nonporous crucible and the porous crucible. Two-step weight reductions took place in the porous crucible, whereas the salt weight reduced only at high temperature by distillation in the nonporous crucible. The first weight reduction in the porous crucible was caused by the liquid salt penetrating out through the perforated crucible as the temperature was raised to the distillation temperature. Multilayer porous crucibles have the benefit of an expanded evaporation surface area. (author)

  5. Maximal quantum Fisher information matrix

    International Nuclear Information System (INIS)

    Chen, Yu; Yuan, Haidong

    2017-01-01

    We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)

  6. Capacity Maximizing Constellations

    Science.gov (United States)

    Barsoum, Maged; Jones, Christopher

    2010-01-01

    Some non-traditional signal constellations have been proposed for transmission of data over the Additive White Gaussian Noise (AWGN) channel using such channel-capacity-approaching codes as low-density parity-check (LDPC) or turbo codes. Computational simulations have shown performance gains of more than 1 dB over traditional constellations. These gains could be translated to bandwidth-efficient communications, variously, over longer distances, using less power, or using smaller antennas. The proposed constellations have been used in a bit-interleaved coded modulation system employing state-of-the-art LDPC codes. In computational simulations, these constellations were shown to afford performance gains over traditional constellations, as predicted by the gap between the parallel decoding capacity of the constellations and the Gaussian capacity.

  7. Sum rate maximization in the uplink of multi-cell OFDMA networks

    KAUST Repository

    Tabassum, Hina

    2012-10-03

    Resource allocation in orthogonal frequency division multiple access (OFDMA) networks plays an imperative role in guaranteeing system performance. However, most known resource allocation schemes focus on maximizing the local throughput of each cell, while ignoring the significant effect of inter-cell interference. This paper investigates the problem of resource allocation (i.e., subcarriers and powers) in the uplink of a multi-cell OFDMA network. The problem has a non-convex combinatorial structure and is known to be NP-hard. Firstly, we investigate the upper and lower bounds on the average network throughput, owing to the inherent complexity of implementing the optimal solution. Later, a centralized sub-optimal resource allocation scheme is developed. We further develop less complex centralized and distributed schemes that are well-suited to practical scenarios. The computational complexity of all schemes has been analyzed and the performance is compared through numerical simulations. Simulation results demonstrate that the distributed scheme achieves comparable performance to the centralized resource allocation scheme in various scenarios. © 2011 IEEE.
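
The paper's centralized and distributed schemes are not reproduced in the abstract; as a minimal single-cell illustration of the kind of local-throughput-oriented allocation it criticizes (our own sketch, with illustrative numbers), the greedy rule gives each subcarrier to the user with the best channel gain:

```python
import math

# Greedy OFDMA subcarrier allocation (single cell, equal power per
# subcarrier): each subcarrier goes to the user with the largest SNR,
# and the sum rate is the Shannon rate summed over subcarriers.
def allocate(gains):
    """gains[u][k] = SNR of user u on subcarrier k."""
    n_sub = len(gains[0])
    assignment = [max(range(len(gains)), key=lambda u: gains[u][k])
                  for k in range(n_sub)]
    sum_rate = sum(math.log2(1 + gains[u][k])
                   for k, u in enumerate(assignment))
    return assignment, sum_rate

gains = [[3.0, 0.5, 1.0],   # user 0
         [1.0, 7.0, 1.0]]   # user 1
assignment, rate = allocate(gains)
print(assignment, rate)  # [0, 1, 0] 6.0 (ties go to the first user)
```

This maximizes the local sum rate but, as the paper notes, says nothing about the interference such an assignment creates in neighbouring cells, which is what the multi-cell formulation addresses.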

  8. Maximize x(a - x)

    Science.gov (United States)

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included is the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
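
One calculus-free argument in the same spirit (completing the square; our own illustration, not necessarily one of the five methods surveyed) runs:

```latex
x(a - x) = \frac{a^2}{4} - \left(x - \frac{a}{2}\right)^2 \le \frac{a^2}{4},
```

with equality exactly when x = a/2, so the product of two quantities with a fixed sum is maximized when the two factors are equal.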

  9. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

    Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log^2 n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.
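
To fix the definition behind the record above: a string is quasiperiodic if some proper prefix (its cover, or quasiperiod) occurs so densely that every position lies inside an occurrence. The brute-force check below is our own sketch for illustration; it is nowhere near the O(n log n) suffix-tree algorithm of the paper:

```python
def occurrences(q, w):
    """Start positions of q in w (overlapping occurrences allowed)."""
    return [i for i in range(len(w) - len(q) + 1) if w[i:i + len(q)] == q]

def covers(q, w):
    """True if every position of w lies inside some occurrence of q."""
    covered = [False] * len(w)
    for i in occurrences(q, w):
        for j in range(i, i + len(q)):
            covered[j] = True
    return all(covered)

def quasiperiods(w):
    """All proper prefixes of w that cover w (brute force, cubic time)."""
    return [w[:k] for k in range(1, len(w)) if covers(w[:k], w)]

print(quasiperiods("abaababaaba"))  # ['aba', 'abaaba']
```

For example, "aba" covers "abaababaaba" through its occurrences at positions 0, 3, 5, and 8; a string with no such covering prefix (e.g. "abc") is not quasiperiodic.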

  10. Development of a high-throughput in vitro assay using a novel Caco-2/rat hepatocyte system for the prediction of oral plasma area under the concentration versus time curve (AUC) in rats.

    Science.gov (United States)

    Cheng, K-C; Li, Cheng; Hsieh, Yunsheng; Montgomery, Diana; Liu, Tongtong; White, Ronald E

    2006-01-01

    Previously, we have shown that a novel Caco-2/human hepatocyte system is a useful model for the prediction of oral bioavailability in humans. In this study, we attempted to use a similar system in a high-throughput screening mode for the selection of new compound entities (NCE) in drug discovery. A total of 72 compounds randomly selected from three different chemotypes were dosed orally in rats, and the in vivo plasma area under the concentration versus time curve (AUC) from 0-6 h of the parent compound was determined. The same compounds were also tested in the Caco-2/rat hepatocyte system, and the in vitro AUC from 0-3 h was determined. The predictive usefulness of the Caco-2/rat hepatocyte system was evaluated by comparing the in vivo plasma AUC and the in vitro AUC. Linear regression analysis showed a reasonable correlation (R^2 = 0.5) between the in vivo AUC and the in vitro AUC. Using 0.4 microM h in vivo AUC as a cut-off, compounds were categorized as either low or high AUC. The in vitro AUC successfully matched the corresponding in vivo category for sixty-three out of seventy-two compounds. The results presented in this study suggest that the Caco-2/rat hepatocyte system may be used as a high-throughput screen in drug discovery for the pharmacokinetic behavior of compounds in rats.
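
The two ways the study compares in vitro with in vivo AUC are a least-squares correlation (R^2) and a cut-off-based low/high concordance. The sketch below illustrates both computations on made-up numbers (the data and cut-offs here are ours, not the study's):

```python
# Least-squares R^2 and cut-off concordance between two AUC series.
def r_squared(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

def concordance(in_vitro, in_vivo, cut_vitro, cut_vivo):
    """Fraction of compounds whose low/high category matches."""
    hits = sum((x >= cut_vitro) == (y >= cut_vivo)
               for x, y in zip(in_vitro, in_vivo))
    return hits / len(in_vitro)

in_vitro = [0.1, 0.3, 0.8, 1.2, 2.0, 0.2]  # illustrative values
in_vivo  = [0.2, 0.25, 0.9, 1.0, 1.8, 0.6]
print(round(r_squared(in_vitro, in_vivo), 2))
print(concordance(in_vitro, in_vivo, cut_vitro=0.5, cut_vivo=0.4))
```

In the study's terms, a 63/72 match corresponds to a concordance of 0.875 at the 0.4 microM h in vivo cut-off.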

  11. The Open Connectome Project Data Cluster: Scalable Analysis and Vision for High-Throughput Neuroscience

    Science.gov (United States)

    Burns, Randal; Roncal, William Gray; Kleissas, Dean; Lillaney, Kunal; Manavalan, Priya; Perlman, Eric; Berger, Daniel R.; Bock, Davi D.; Chung, Kwanghun; Grosenick, Logan; Kasthuri, Narayanan; Weiler, Nicholas C.; Deisseroth, Karl; Kazhdan, Michael; Lichtman, Jeff; Reid, R. Clay; Smith, Stephen J.; Szalay, Alexander S.; Vogelstein, Joshua T.; Vogelstein, R. Jacob

    2013-01-01

    We describe a scalable database cluster for the spatial analysis and annotation of high-throughput brain imaging data, initially for 3-d electron microscopy image stacks, but for time-series and multi-channel data as well. The system was designed primarily for workloads that build connectomes (neural connectivity maps of the brain) using the parallel execution of computer vision algorithms on high-performance compute clusters. These services and open-science data sets are publicly available at openconnecto.me. The system design inherits much from NoSQL scale-out and data-intensive computing architectures. We distribute data to cluster nodes by partitioning a spatial index. We direct I/O to different systems (reads to parallel disk arrays and writes to solid-state storage) to avoid I/O interference and maximize throughput. All programming interfaces are RESTful Web services, which are simple and stateless, improving scalability and usability. We include a performance evaluation of the production system, highlighting the effectiveness of spatial data organization. PMID:24401992

  12. MXIbus data throughput tests

    International Nuclear Information System (INIS)

    Botlo, M.; Dunning, J.; Jagieski, M.; Miller, L.; Romero, A.

    1992-11-01

    A series of tests were conducted to evaluate data transfer rates using the MXIbus architecture. The tests were conducted by the DAQ group in the Physics Research Division. The MXIbus from National Instruments provides a multisystem extension interface bus. It allows multiple VME chassis to be networked. Other bus architectures that can participate in the network include the VXIbus, IBM PC-AT bus, Sun Sbus, Mac NuBus, and stand-alone instruments with the appropriate MXIbus adapter cards. From a functional standpoint, the MXIbus provides the capability to enlarge the address space in a fashion that is transparent to the software application. The tests were designed to measure data throughput when using the MXIbus with other industry off-the-shelf hardware. This report contains discussions on: MXIbus architecture and general guidelines; the commercial hardware and software used in each set of tests; and a brief description of each set of tests, observations, and conclusions.

  13. On the maximal diphoton width

    CERN Document Server

    Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo

    2016-01-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\\gamma\\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  14. Implicit Consensus: Blockchain with Unbounded Throughput

    OpenAIRE

    Ren, Zhijie; Cong, Kelong; Pouwelse, Johan; Erkin, Zekeriya

    2017-01-01

    Recently, the blockchain technique was put in the spotlight as it introduced a systematic approach for multiple parties to reach consensus without needing trust. However, the application of this technique in practice is severely restricted due to its limitations in throughput. In this paper, we propose a novel consensus model, namely the implicit consensus, with a distinctive blockchain-based distributed ledger in which each node holds its individual blockchain. In our system, the consensus i...

  15. Maximization

    Directory of Open Access Journals (Sweden)

    A. Garmroodi Asil

    2017-09-01

    To further reduce the sulfur dioxide emission of the entire refining process, two scenarios of acid gas or air preheating are investigated when either of them is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and highest combustion chamber temperature are slightly higher for acid gas preheating, but air preheating is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.

  16. Development of a high-throughput in vitro intestinal lipolysis model for rapid screening of lipid-based drug delivery systems

    DEFF Research Database (Denmark)

    Mosgaard, Mette D; Sassene, Philip; Mu, Huiling

    2015-01-01

    ...(DIVL) model with regard to the extent of lipid digestion and drug distribution of two poorly soluble model drugs (cinnarizine and danazol), during digestion of three LbDDS (LbDDS I-III). RESULT: The HTP model was able to maintain pH around 6.5 during digestion, without the addition of Na... CONCLUSION: The HTP model is able to predict drug distribution during digestion of LbDDS containing poorly water soluble drugs in the same manner as the DIVL model. Thus the HTP model might prove applicable for high-throughput evaluation of LbDDS in e.g. 96 well plates or small scale dissolution equipment.

  17. Chamaebatiaria millefolium (Torr.) Maxim.: fernbush

    Science.gov (United States)

    Nancy L. Shaw; Emerenciana G. Hurd

    2008-01-01

    Fernbush - Chamaebatiaria millefolium (Torr.) Maxim. - the only species in its genus, is endemic to the Great Basin, Colorado Plateau, and adjacent areas of the western United States. It is an upright, generally multistemmed, sweetly aromatic shrub 0.3 to 2 m tall. Bark of young branches is brown and becomes smooth and gray with age. Leaves are leathery, alternate,...

  18. Optimal throughput for cognitive radio with energy harvesting in fading wireless channel.

    Science.gov (United States)

    Vu-Van, Hiep; Koo, Insoo

    2014-01-01

    Energy resource management is a crucial problem for a device with a finite-capacity battery. In this paper, the cognitive radio is considered to be a device with an energy harvester that can harvest energy from a non-RF energy resource while performing other actions of the cognitive radio. Harvested energy is stored in a finite-capacity battery. At the start of each time slot, the cognitive radio needs to determine whether it should remain silent or carry out spectrum sensing, based on the idle probability of the primary user and the remaining energy, in order to maximize the throughput of the cognitive radio system. In addition, optimal sensing energy and adaptive transmission power control are also investigated in this paper to effectively utilize the limited energy of the cognitive radio. Finding an optimal approach is formulated as a partially observable Markov decision process. The simulation results show that the proposed optimal decision scheme outperforms the myopic scheme, which considers only the current throughput when making a decision.
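The flavor of this decision problem can be illustrated with a much-simplified, fully observable finite-horizon dynamic program (the paper itself formulates a POMDP; the integer battery levels, energy costs, and deterministic harvesting below are illustrative assumptions):

```python
def plan_throughput(cap, e_sense, e_tx, e_harvest, p_idle, horizon, rate=1.0):
    """Value iteration over integer battery levels 0..cap.
    Each slot the radio either stays silent (harvests only) or senses and,
    if the primary user is idle (prob. p_idle), transmits for a reward."""
    V = [0.0] * (cap + 1)                       # value-to-go per battery level
    for _ in range(horizon):
        newV = []
        for b in range(cap + 1):
            best = V[min(cap, b + e_harvest)]   # action: remain silent
            if b >= e_sense + e_tx:             # action: sense, if affordable
                b1 = b - e_sense
                sense = (p_idle * (rate + V[min(cap, b1 - e_tx + e_harvest)])
                         + (1 - p_idle) * V[min(cap, b1 + e_harvest)])
                best = max(best, sense)
            newV.append(best)
        V = newV
    return V
```

For example, when harvesting exactly covers the sensing-plus-transmission cost (e_sense=1, e_tx=2, e_harvest=3) and the channel is always idle, a full battery yields one unit of throughput per slot.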

  19. Numerical performance and throughput benchmark for electronic structure calculations in PC-Linux systems with new architectures, updated compilers, and libraries.

    Science.gov (United States)

    Yu, Jen-Shiang K; Hwang, Jenn-Kang; Tang, Chuan Yi; Yu, Chin-Hui

    2004-01-01

    A number of recently released numerical libraries, including the Automatically Tuned Linear Algebra Subroutines (ATLAS) library, Intel Math Kernel Library (MKL), GOTO numerical library, and AMD Core Math Library (ACML) for AMD Opteron processors, are linked against the executables of the Gaussian 98 electronic structure calculation package, which is compiled by updated versions of Fortran compilers such as Intel Fortran compiler (ifc/efc) 7.1 and PGI Fortran compiler (pgf77/pgf90) 5.0. The ifc 7.1 delivers about 3% improvement on 32-bit machines compared to the former version 6.0. The performance improvement from pgf77 3.3 to 5.0 is also around 3% when utilizing the original unmodified optimization options enclosed in the software. Nevertheless, if extensive compiler tuning options are used, the speed can be further improved by about 25%. The performances of these fully optimized numerical libraries are similar. The double-precision floating-point (FP) instruction sets (SSE2) are also functional on AMD Opteron processors operated in 32-bit compilation, and the Intel Fortran compiler has performed better optimization. Hardware-level tuning is able to improve memory bandwidth by adjusting the DRAM timing, and the CL2 mode is a further 2.6% faster than the CL2.5 mode. The FP throughput is measured by simultaneous execution of two identical copies of each of the test jobs. The resultant performance impact suggests that the IA64 and AMD64 architectures are able to deliver significantly higher throughput than IA32, which is consistent with the SpecFPrate2000 benchmarks.

  20. High Throughput Transcriptomics @ USEPA (Toxicology ...

    Science.gov (United States)

    The ideal chemical testing approach will provide complete coverage of all relevant toxicological responses. It should be sensitive and specific. It should identify the mechanism/mode-of-action (with dose-dependence). It should identify responses relevant to the species of interest. Responses should ideally be translated into tissue-, organ-, and organism-level effects. It must be economical and scalable. Using a High Throughput Transcriptomics platform within US EPA provides broader coverage of biological activity space and toxicological MOAs and helps fill the toxicological data gap. Slide presentation at the 2016 ToxForum on using High Throughput Transcriptomics at US EPA for broader coverage of biological activity space and toxicological MOAs.

  1. IMNN: Information Maximizing Neural Networks

    Science.gov (United States)

    Charnock, Tom; Lavaux, Guilhem; Wandelt, Benjamin D.

    2018-04-01

    This software trains artificial neural networks to find non-linear functionals of data that maximize Fisher information: information maximizing neural networks (IMNNs). As compressing large data sets vastly simplifies both frequentist and Bayesian inference, important information may be inadvertently missed. Likelihood-free inference based on automatically derived IMNN summaries produces summaries that are good approximations to sufficient statistics. IMNNs are robustly capable of automatically finding optimal, non-linear summaries of the data even in cases where linear compression fails: inferring the variance of Gaussian signal in the presence of noise, inferring cosmological parameters from mock simulations of the Lyman-α forest in quasar spectra, and inferring frequency-domain parameters from LISA-like detections of gravitational waveforms. In this final case, the IMNN summary outperforms linear data compression by avoiding the introduction of spurious likelihood maxima.

  2. Is the β phase maximal?

    International Nuclear Information System (INIS)

    Ferrandis, Javier

    2005-01-01

    The current experimental determination of the absolute values of the CKM elements indicates that $2|V_{ub}/(V_{cb}V_{us})| = (1-z)$, with z given by z=0.19±0.14. This fact implies that, irrespective of the form of the quark Yukawa matrices, the measured value of the SM CP phase β is approximately the maximum allowed by the measured absolute values of the CKM elements. This is β=(π/6-z/3) for γ=(π/3+z/3), which implies α=π/2. Alternatively, assuming that β is exactly maximal and using the experimental measurement sin(2β)=0.726±0.037, the phase γ is predicted to be γ=(π/2-β)=66.3°±1.7°. The maximality of β, if confirmed by near-future experiments, may give us some clues as to the origin of CP violation
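As a quick consistency check of the quoted angles (implied but not spelled out in the abstract), the three unitarity-triangle angles sum to π, with the z-dependence cancelling between β and γ:

```latex
\alpha + \beta + \gamma
  = \frac{\pi}{2}
  + \left(\frac{\pi}{6} - \frac{z}{3}\right)
  + \left(\frac{\pi}{3} + \frac{z}{3}\right)
  = \frac{\pi}{2} + \frac{\pi}{2}
  = \pi .
```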

  3. Strategy to maximize maintenance operation

    OpenAIRE

    Espinoza, Michael

    2005-01-01

    This project presents a strategic analysis to maximize maintenance operations in Alcan Kitimat Works in British Columbia. The project studies the role of maintenance in improving its overall maintenance performance. It provides strategic alternatives and specific recommendations addressing Kitimat Works key strategic issues and problems. A comprehensive industry and competitive analysis identifies the industry structure and its competitive forces. In the mature aluminium industry, the bargain...

  4. Scalable Nonlinear AUC Maximization Methods

    OpenAIRE

    Khalid, Majdi; Ray, Indrakshi; Chitsaz, Hamidreza

    2017-01-01

    The area under the ROC curve (AUC) is a measure of interest in various machine learning and data mining applications. It has been widely used to evaluate classification performance on heavily imbalanced data. The kernelized AUC maximization machines have established a superior generalization ability compared to linear AUC machines because of their capability in modeling the complex nonlinear structure underlying most real world-data. However, the high training complexity renders the kernelize...

  5. Throughput maximization for buffer-aided hybrid half-/full-duplex relaying with self-interference

    KAUST Repository

    Khafagy, Mohammad Galal; El Shafie, Ahmed; Salem, Ahmed Sultan; Alouini, Mohamed-Slim

    2015-01-01

    is efficiently obtained via standard convex/linear numerical optimization tools. Finally, the theoretical findings are corroborated with event-based simulations to provide the necessary performance validation.

  6. Adjusting Sensing Range to Maximize Throughput on Ad-Hoc Multi-Hop Wireless Networks

    National Research Council Canada - National Science Library

    Roberts, Christopher

    2003-01-01

    .... Such a network is referred to as a multi-hop ad-hoc network, or simply a multi-hop network. Most multi-hop network protocols use some form of carrier sensing to determine if the wireless channel is in use...

  7. How do I manage and staff for intelligent transportation systems? : thinking outside the box : a cross-cutting study : maximizing project resources and advancing coordination

    Science.gov (United States)

    2000-08-01

    Intelligent transportation systems (ITS) projects often need staff with skills that are not resident in traditional transportation organizations. Therefore, project administrators must sometimes look beyond the usual staffing methods to fill these po...

  8. FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION

    Directory of Open Access Journals (Sweden)

    Rahmawati Sukmaningrum

    2017-04-01

    This study aims to identify the types of maxims flouted in conversation on the famous comedy show Indonesia Lawak Club. It also tries to reveal the speakers' intentions in flouting the maxims during the show. The writers use a descriptive qualitative method in conducting this research. The data are taken from dialogue on Indonesia Lawak Club and then analyzed based on Grice's cooperative principles. The researchers read the dialogue transcripts, identify the maxims, and interpret the data to find the speakers' intentions for flouting the maxims in communication. The results show that four types of maxims are flouted in the dialogue: maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.

  9. Single maximal versus combination punch kinematics.

    Science.gov (United States)

    Piorkowski, Barry A; Lees, Adrian; Barton, Gabor J

    2011-03-01

    The aim of this study was to determine the influence of punch type (Jab, Cross, Lead Hook and Reverse Hook) and punch modality (Single maximal, 'In-synch' and 'Out of synch' combination) on punch speed and delivery time. Ten competition-standard volunteers performed punches with markers placed on their anatomical landmarks for 3D motion capture with an eight-camera optoelectronic system. Speed and duration between key moments were computed. There were significant differences in contact speed between punch types (F(2.18, 84.87) = 105.76, p = 0.001) with Lead and Reverse Hooks developing greater speed than Jab and Cross. There were significant differences in contact speed between punch modalities (F(2.64, 102.87) = 23.52, p = 0.001) with the Single maximal (M ± SD: 9.26 ± 2.09 m/s) higher than 'Out of synch' (7.49 ± 2.32 m/s), 'In-synch' left (8.01 ± 2.35 m/s) or right lead (7.97 ± 2.53 m/s). Delivery times were significantly lower for Jab and Cross than Hook. Times were significantly lower 'In-synch' than a Single maximal or 'Out of synch' combination mode. It is concluded that a defender may have more evasion-time than previously reported. This research could be of use to performers and coaches when considering training preparations.

  10. Formation Control for the MAXIM Mission

    Science.gov (United States)

    Luquette, Richard J.; Leitner, Jesse; Gendreau, Keith; Sanner, Robert M.

    2004-01-01

    Over the next twenty years, a wave of change is occurring in the space-based scientific remote sensing community. While the fundamental limits in the spatial and angular resolution achievable in spacecraft have been reached, based on today's technology, an expansive new technology base has appeared over the past decade in the area of Distributed Space Systems (DSS). A key subset of the DSS technology area is that which covers precision formation flying of space vehicles. Through precision formation flying, the baselines, previously defined by the largest monolithic structure which could fit in the largest launch vehicle fairing, are now virtually unlimited. Several missions including the Micro-Arcsecond X-ray Imaging Mission (MAXIM), and the Stellar Imager will drive the formation flying challenges to achieve unprecedented baselines for high resolution, extended-scene, interferometry in the ultraviolet and X-ray regimes. This paper focuses on establishing the feasibility for the formation control of the MAXIM mission. MAXIM formation flying requirements are on the order of microns, while Stellar Imager mission requirements are on the order of nanometers. This paper specifically addresses: (1) high-level science requirements for these missions and how they evolve into engineering requirements; and (2) the development of linearized equations of relative motion for a formation operating in an n-body gravitational field. Linearized equations of motion provide the groundwork for linear formation control designs.

  11. Application of Chemical Genomics to Plant-Bacteria Communication: A High-Throughput System to Identify Novel Molecules Modulating the Induction of Bacterial Virulence Genes by Plant Signals.

    Science.gov (United States)

    Vandelle, Elodie; Puttilli, Maria Rita; Chini, Andrea; Devescovi, Giulia; Venturi, Vittorio; Polverari, Annalisa

    2017-01-01

    The life cycle of bacterial phytopathogens consists of a benign epiphytic phase, during which the bacteria grow in the soil or on the plant surface, and a virulent endophytic phase involving the penetration of host defenses and the colonization of plant tissues. Innovative strategies are urgently required to integrate copper treatments that control the epiphytic phase with complementary tools that control the virulent endophytic phase, thus reducing the quantity of chemicals applied to economically and ecologically acceptable levels. Such strategies include targeted treatments that weaken bacterial pathogens, particularly those inhibiting early infection steps rather than tackling established infections. This chapter describes a reporter gene-based chemical genomic high-throughput screen for the induction of bacterial virulence by plant molecules. Specifically, we describe a chemical genomic screening method to identify agonist and antagonist molecules for the induction of targeted bacterial virulence genes by plant extracts, focusing on the experimental controls required to avoid false positives and thus ensuring the results are reliable and reproducible.

  12. Washout and non-washout solutions of a system describing microbial fermentation process under the influence of growth inhibitions and maximal concentration of yeast cells.

    Science.gov (United States)

    Kasbawati; Gunawan, Agus Yodi; Sidarto, Kuntjoro Adjie

    2017-07-01

    An unstructured model for the growth of yeast cells on glucose under growth inhibition by substrate, products, and cell density is discussed. The proposed model describes the dynamical behavior of a fermentation system that shows multiple steady states for a certain regime of operating parameters such as inlet glucose and dilution rate. Two types of steady-state solutions are found, namely washout and non-washout solutions. Furthermore, different numerical choices of these two parameters reveal three cases regarding the non-washout solution: a unique locally stable non-washout solution, a unique locally stable non-washout solution towards which other nearby solutions exhibit damped oscillations, and multiple non-washout solutions where one is locally stable while the other is unstable. An optimal inlet glucose concentration is also found, which produces the highest cell and ethanol concentrations. Copyright © 2017 Elsevier Inc. All rights reserved.
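The washout/non-washout dichotomy can be illustrated with the simplest chemostat model, Monod growth with no product or cell-density inhibition (a deliberate simplification of the paper's model; all parameter values below are hypothetical):

```python
def chemostat_steady_states(mu_max, Ks, D, S_in, Y):
    """Steady states of the Monod chemostat
        dX/dt = (mu(S) - D) * X
        dS/dt = D * (S_in - S) - mu(S) * X / Y,
    with mu(S) = mu_max * S / (Ks + S).  Returns a list of (X, S) pairs."""
    washout = (0.0, S_in)                     # X = 0, substrate passes through
    if D >= mu_max * S_in / (Ks + S_in):      # dilution outpaces growth:
        return [washout]                      # only the washout state exists
    S_star = Ks * D / (mu_max - D)            # solve mu(S*) = D
    X_star = Y * (S_in - S_star)              # yield on consumed substrate
    return [washout, (X_star, S_star)]
```

The non-washout branch appears only when the dilution rate D is below the growth rate achievable at the inlet concentration, mirroring the parameter regimes discussed in the abstract.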

  13. Plug-in hybrid electric vehicles as a way to maximize the integration of variable renewable energy in power systems: The case of wind generation in northeastern Brazil

    International Nuclear Information System (INIS)

    Soares MC Borba, Bruno; Szklo, Alexandre; Schaeffer, Roberto

    2012-01-01

    Several studies have proposed different tools for analyzing the integration of variable renewable energy into power grids. This study applies an optimization tool to model the expansion of the electric power system in northeastern Brazil, enabling the most efficient dispatch of the variable output of the wind farms that will be built in the region over the next 20 years. The expected combined expansion of wind generation with conventional inflexible generation facilities, such as nuclear plants and run-of-the-river hydropower plants, poses risks of future mismatch between supply and demand in northeastern Brazil. Therefore, this article evaluates the possibility of using a fleet of plug-in hybrid electric vehicles (PHEVs) to regularize possible energy imbalances. Findings indicate that a dedicated fleet of 500 thousand PHEVs in 2015, and a further 1.5 million in 2030, could be recharged overnight to take advantage of the surplus power generated by wind farms. To avoid the initial costs of smart grids, this article suggests, as a first step, the use of a governmental PHEV fleet that allows fleet managers to control battery charging times. Finally, the study demonstrates the advantages of optimizing simultaneously the power and transport sectors to test the strategy suggested here. -- Highlights: ► We evaluated the use of plug-in hybrid electric vehicles (PHEV) to regularize possible energy imbalances in northeastern Brazil. ► This imbalance might result from the large-scale wind power penetration along with conventional inflexible power plants in the region. ► We adapted the MESSAGE optimization tool to the base conditions of the Brazilian power system. ► 500 thousand PHEVs in 2015 and 1.5 million in 2030 could be recharged taking advantage of wind energy surplus.

  14. A Cross-Layer Approach for Maximizing Visual Entropy Using Closed-Loop Downlink MIMO

    Directory of Open Access Journals (Sweden)

    Hyungkeuk Lee

    2008-07-01

    We propose an adaptive video transmission scheme to achieve unequal error protection in a closed-loop multiple input multiple output (MIMO) system for wavelet-based video coding. In this scheme, visual entropy is employed as a video quality metric in agreement with the human visual system (HVS), and the associated visual weight is used to obtain a set of optimal powers in the MIMO system for maximizing the visual quality of the reconstructed video. For ease of cross-layer optimization, the video sequence is divided into several streams, and the visual importance of each stream is quantified using the visual weight. Moreover, an adaptive load balance control, named equal termination scheduling (ETS), is proposed to improve the throughput of visually important data with higher priority. An optimal solution for power allocation is derived in closed form using a Lagrangian relaxation method. In the simulation results, a highly improved visual quality is demonstrated in the reconstructed video via the cross-layer approach by means of visual entropy.
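Weighted power allocation of this kind typically reduces to weighted water-filling: maximize sum_i w_i * log(1 + g_i * p_i) subject to sum_i p_i = P, which the KKT conditions turn into p_i = max(0, w_i/lam - 1/g_i) with the multiplier lam found numerically. A sketch under these assumptions (not the paper's exact formulation, where the weights come from visual entropy):

```python
def weighted_waterfill(weights, gains, total_power, iters=200):
    """Solve max sum(w_i * log(1 + g_i * p_i)) s.t. sum(p_i) = P, p_i >= 0.
    KKT gives p_i = max(0, w_i/lam - 1/g_i); bisect on the multiplier lam."""
    def alloc(lam):
        return [max(0.0, w / lam - 1.0 / g) for w, g in zip(weights, gains)]
    # bracket: tiny lam over-allocates, lam = max(w*g) allocates exactly zero
    lo, hi = 1e-12, max(w * g for w, g in zip(weights, gains))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(alloc(mid)) > total_power:
            lo = mid                  # allocation too large: raise the level
        else:
            hi = mid
    return alloc(0.5 * (lo + hi))
```

Streams with larger weights (here, more visually important ones) receive more power, and weak channels may receive none at all.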

  15. Maximally reliable Markov chains under energy constraints.

    Science.gov (United States)

    Escola, Sean; Eisele, Michael; Miller, Kenneth; Paninski, Liam

    2009-07-01

    Signal-to-noise ratios in physical systems can be significantly degraded if the outputs of the systems are highly variable. Biological processes for which highly stereotyped signal generations are necessary features appear to have reduced their signal variabilities by employing multiple processing steps. To better understand why this multistep cascade structure might be desirable, we prove that the reliability of a signal generated by a multistate system with no memory (i.e., a Markov chain) is maximal if and only if the system topology is such that the process steps irreversibly through each state, with transition rates chosen such that an equal fraction of the total signal is generated in each state. Furthermore, our result indicates that by increasing the number of states, it is possible to arbitrarily increase the reliability of the system. In a physical system, however, an energy cost is associated with maintaining irreversible transitions, and this cost increases with the number of such transitions (i.e., the number of states). Thus, an infinite-length chain, which would be perfectly reliable, is infeasible. To model the effects of energy demands on the maximally reliable solution, we numerically optimize the topology under two distinct energy functions that penalize either irreversible transitions or incommunicability between states, respectively. In both cases, the solutions are essentially irreversible linear chains, but with upper bounds on the number of states set by the amount of available energy. We therefore conclude that a physical system for which signal reliability is important should employ a linear architecture, with the number of states (and thus the reliability) determined by the intrinsic energy constraints of the system.
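The central claim can be checked with a small calculation: the completion time of an irreversible linear chain is a sum of independent exponential holding times, so its coefficient of variation falls as 1/sqrt(n) for n equal-rate states, and equal rates (the abstract's equal-fraction condition) beat unequal ones. A sketch, not taken from the paper:

```python
import math

def completion_time_cv(rates):
    """Coefficient of variation of the total traversal time of an
    irreversible linear chain with one exponential holding time per state.
    For independent exponentials, means and variances simply add."""
    mean = sum(1.0 / r for r in rates)
    var = sum(1.0 / r ** 2 for r in rates)
    return math.sqrt(var) / mean
```

A lower CV means a more stereotyped (more reliable) total signal, so lengthening the chain or equalizing the per-state rates both improve reliability, exactly as the abstract argues.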

  16. Optimal quantum error correcting codes from absolutely maximally entangled states

    Science.gov (United States)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension …

  17. Effects of habitat disturbance on the pollination system of Ammopiptanthus mongolicus (Maxim) Cheng f. at the landscape-level in an arid region of Northwest China.

    Science.gov (United States)

    Chen, Min; Zhao, Xue-Yong; Zuo, Xiao-An; Mao, Wei; Qu, Hao; Zhu, Yang-Chun

    2016-05-01

    Ammopiptanthus mongolicus is an ecologically important species in the arid region of Northwest China. Habitat disturbance can significantly affect plant mating success and ultimately species viability. Pollen limitation of plant reproduction occurs in many plant species, particularly those under habitat disturbance. However, previous investigations have demonstrated differences in pollen limitation between conserved and disturbed sites. We compared the phenology, pollen limitation, pollinators and breeding system of both sites to determine whether habitat disturbance has generated changes in these plant components. We found that the species differed in four aspects. First, blooming duration and flowering peak were longer in the disturbed site than in the conserved site. Second, A. mongolicus can be pollen-limited, and pollen limitation was more intense in the conserved site than in the disturbed site. Third, Anthophora uljanini was found to be a frequent pollinator in the conserved site, while Apis mellifera was the most effective and frequent flower visitor. More pollinator visits were recorded in the disturbed site, which could explain the differences in reproductive success. Finally, seed set was higher in the disturbed site than in the conserved site. We found that outcrossing was dominant in both sites and that agamospermy and self-pollination played complementary roles to ensure reproduction. Differences in flower production influenced by artificial selection and pollinator type explain the different seed set in both sites, whereas habitat disturbance causes differences in the pollination process and limits pollen flow. The balance between artificial management and mating success is crucial to analysis of the pollination process and manipulation of A. mongolicus population size.

  18. The T2-Shortening Effect of Gadolinium and the Optimal Conditions for Maximizing the CNR for Evaluating the Biliary System: a Phantom Study

    International Nuclear Information System (INIS)

    Lee, Mi Jung; Kim, Myung Joon; Yoon, Choon Sik; Song, Si Young; Park, Kyung Soo; Kim, Woo Sun

    2011-01-01

    Clear depiction of the common bile duct is important when evaluating neonatal cholestasis in order to differentiate biliary atresia from other diseases. During MR cholangiopancreatography, the T2-shortening effect of gadolinium can increase the contrast-to-noise ratio (CNR) of the bile duct and enhance its depiction. The purpose of this study was to confirm, by performing a phantom study, the T2-shortening effect of gadolinium, to evaluate the effect of different gadolinium chelates at different gadolinium concentrations and different magnetic field strengths, to investigate the optimal combination of these conditions, and to identify the maximum CNR for the evaluation of the biliary system. MR imaging using a T2-weighted single-shot fast spin echo sequence and T2 relaxometry was performed with a sponge phantom in a syringe tube. Two kinds of contrast agents (Gd-DTPA and Gd-EOB-DTPA) with different gadolinium concentrations were evaluated with 1.5T and 3T scanners. The signal intensities, the CNRs and the T2 relaxation time were analyzed. The signal intensities significantly decreased as the gadolinium concentrations increased (p < 0.001) with both contrast agents. These signal intensities were higher on a 3T (p < 0.001) scanner. The CNRs were higher on a 1.5T (p < 0.001) scanner and they showed no significant change with different gadolinium concentrations. The T2 relaxation time also showed a negative correlation with the gadolinium concentrations (p < 0.001), and the CNRs decreased more with Gd-EOB-DTPA (versus Gd-DTPA; p < 0.001) on a 3T scanner (versus 1.5T; p < 0.001). The T2-shortening effect of gadolinium exhibits a negative correlation with the gadolinium concentration for both the signal intensities and the T2 relaxation time. A higher CNR can be obtained with Gd-DTPA on a 1.5T MRI scanner.
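The negative correlation between T2 and gadolinium concentration follows the standard fast-exchange relaxivity model, 1/T2 = 1/T2_0 + r2 * C. A sketch with hypothetical parameter values (the study does not report relaxivities, so the numbers below are purely illustrative):

```python
def t2_ms(t2_base_ms, r2_per_mM_s, conc_mM):
    """T2 after adding a contrast agent: 1/T2 = 1/T2_0 + r2 * C.
    r2 is the transverse relaxivity in 1/(mM*s); T2 values are in ms."""
    r2_per_mM_ms = r2_per_mM_s / 1000.0      # convert relaxivity to 1/(mM*ms)
    return 1.0 / (1.0 / t2_base_ms + r2_per_mM_ms * conc_mM)
```

Because the relaxation rates add, T2 falls monotonically with concentration, which is the behavior the phantom measurements confirm.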

  19. Quench dynamics of topological maximally entangled states.

    Science.gov (United States)

    Chung, Ming-Chiang; Jhu, Yi-Hao; Chen, Pochung; Mou, Chung-Yu

    2013-07-17

    We investigate the quench dynamics of the one-particle entanglement spectra (OPES) for systems with topologically nontrivial phases. By using dimerized chains as an example, it is demonstrated that the evolution of OPES for the quenched bipartite systems is governed by an effective Hamiltonian which is characterized by a pseudospin in a time-dependent pseudomagnetic field S(k,t). The existence and evolution of the topological maximally entangled states (tMESs) are determined by the winding number of S(k,t) in the k-space. In particular, the tMESs survive only if nontrivial Berry phases are induced by the winding of S(k,t). In the infinite-time limit the equilibrium OPES can be determined by an effective time-independent pseudomagnetic field Seff(k). Furthermore, when tMESs are unstable, they are destroyed by quasiparticles within a characteristic timescale in proportion to the system size.

  20. Maximizing benefits from resource development

    International Nuclear Information System (INIS)

    Skjelbred, B.

    2002-01-01

    The main objectives of Norwegian petroleum policy are to maximize the value creation for the country, develop a national oil and gas industry, and to be at the environmental forefront of long term resource management and coexistence with other industries. The paper presents a graph depicting production and net export of crude oil for countries around the world for 2002. Norway produced 3.41 mill b/d and exported 3.22 mill b/d. Norwegian petroleum policy measures include effective regulation and government ownership, research and technology development, and internationalisation. Research and development has been in five priority areas, including enhanced recovery, environmental protection, deep water recovery, small fields, and the gas value chain. The benefits of internationalisation includes capitalizing on Norwegian competency, exploiting emerging markets and the assurance of long-term value creation and employment. 5 figs

  1. Maximizing synchronizability of duplex networks

    Science.gov (United States)

    Wei, Xiang; Emenheiser, Jeffrey; Wu, Xiaoqun; Lu, Jun-an; D'Souza, Raissa M.

    2018-01-01

    We study the synchronizability of duplex networks formed by two randomly generated network layers with different patterns of interlayer node connections. According to the master stability function, we use the smallest nonzero eigenvalue and the eigenratio between the largest and the second smallest eigenvalues of supra-Laplacian matrices to characterize synchronizability on various duplexes. We find that the interlayer linking weight and linking fraction have a profound impact on synchronizability of duplex networks. Increasing the interlayer coupling weight is found to cause either decreasing or constant synchronizability for different classes of network dynamics. In addition, negative node degree correlation across interlayer links outperforms positive degree correlation when most interlayer links are present. The reverse is true when only a few interlayer links are present. The numerical results and understanding based on these representative duplex networks are illustrative and instructive for building insights into maximizing synchronizability of more realistic multiplex networks.
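The eigenratio criterion from the master stability function is straightforward to compute. A minimal single-layer NumPy sketch (the paper applies the same quantities to supra-Laplacians of duplex networks; the example graphs here are illustrative):

```python
import numpy as np

def eigenratio(adj):
    """lambda_N / lambda_2 of the graph Laplacian L = D - A.
    Smaller ratios indicate better synchronizability."""
    L = np.diag(adj.sum(axis=1)) - adj
    ev = np.sort(np.linalg.eigvalsh(L))   # eigvalsh: symmetric matrices
    return ev[-1] / ev[1]

# complete graph on 4 nodes: best possible ratio, exactly 1
K4 = np.ones((4, 4)) - np.eye(4)
# path graph on 4 nodes: harder to synchronize, ratio > 1
P4 = np.zeros((4, 4))
for i in range(3):
    P4[i, i + 1] = P4[i + 1, i] = 1.0
```

For a duplex network one would build the supra-Laplacian (layer Laplacians plus weighted interlayer coupling terms) and feed it to the same function.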

  2. Power Converters Maximize Outputs Of Solar Cell Strings

    Science.gov (United States)

    Frederick, Martin E.; Jermakian, Joel B.

    1993-01-01

    Microprocessor-controlled dc-to-dc power converters devised to maximize power transferred from solar photovoltaic strings to storage batteries and other electrical loads. Converters help in utilizing large solar photovoltaic arrays most effectively with respect to cost, size, and weight. Main points of invention are: single controller used to control and optimize any number of "dumb" tracker units and strings independently; power maximized out of converters; and controller in system is microprocessor.
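    The record above does not give the converters' control law; a common maximum-power-point tracking scheme used by such microprocessor controllers is perturb-and-observe hill climbing. A minimal sketch, assuming a hypothetical single-peak power curve (`pv_power` and all parameter values are illustrative, not from the invention):

```python
def pv_power(v):
    # Toy solar-string power curve (watts) with a single peak at 17 V
    return 60.0 - (v - 17.0) ** 2

def perturb_and_observe(power, v0=12.0, step=0.2, iters=200):
    # Classic P&O hill climbing: keep perturbing the operating voltage in the
    # direction that last increased output power; reverse after overshooting
    v, p, direction = v0, power(v0), +1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = power(v_next)
        if p_next < p:
            direction = -direction  # passed the peak: reverse the perturbation
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = perturb_and_observe(pv_power)
```

    The operating point settles into a small oscillation around the maximum-power voltage, which is the characteristic steady-state behavior of P&O trackers.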

  3. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

    Maxim is a principle that must be obeyed by all participants, textually and interpersonally, in order to have a smooth communication process. Conversational maxims are divided into four kinds: maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to the speaking partner, and it results in an awkward impression; examples of violation are given information that is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate maxims to create unique and controversial advertisements. This study examines the violation of maxims in conversations in TV ads. The data source is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with the informal method. The results show the interesting fact that the maxim violations found in the advertisements are exactly what make the advertisements very attractive and give them high value.

  4. Gain maximization in a probabilistic entanglement protocol

    Science.gov (United States)

    di Lorenzo, Antonio; Esteves de Queiroz, Johnny Hebert

    Entanglement is a resource. We can therefore define gain as a monotonic function of entanglement, G(E). If a pair with entanglement E is produced with probability P, the net gain is N = P·G(E) − (1 − P)·C, where C is the cost of a failed attempt. We study a protocol where a pair of quantum systems is produced in a maximally entangled state ρ_m with probability P_m, while it is produced in a partially entangled state ρ_p with the complementary probability 1 − P_m. We mix a fraction w of the partially entangled pairs with the maximally entangled ones, i.e. we take the state to be ρ = (ρ_m + w U_loc ρ_p U_loc†)/(1 + w), where U_loc is an appropriate unitary local operation designed to maximize the entanglement of ρ. This procedure on one hand reduces the entanglement E, and hence the gain, but on the other hand increases the probability of success to P = P_m + w(1 − P_m), so the net gain N may increase. There may hence be, a priori, an optimal value for w, the fraction of failed attempts that we mix in. We show that, under the hypothesis of a linear gain G(E) = E, and even assuming a vanishing cost C → 0, the net gain N is increasing with w, therefore the best strategy is to always mix in the partially entangled states. Work supported by CNPq, Conselho Nacional de Desenvolvimento Científico e Tecnológico, proc. 311288/2014-6, and by FAPEMIG, Fundação de Amparo à Pesquisa de Minas Gerais, proc. IC-FAPEMIG2016-0269 and PPM-00607-16.
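    The net-gain argument can be checked numerically. A minimal sketch with the linear gain G(E) = E and vanishing cost C from the abstract, plus one toy assumption not in the abstract: the entanglement of the mixture is taken as a weighted average of E_m and E_p (the parameter values are also hypothetical).

```python
import numpy as np

def net_gain(w, P_m=0.3, E_m=1.0, E_p=0.5, C=0.0):
    # Success probability grows as failed attempts are mixed in
    P = P_m + w * (1.0 - P_m)
    # Toy assumption: entanglement of the mixture is the weighted average
    E = (E_m + w * E_p) / (1.0 + w)
    return P * E - (1.0 - P) * C   # linear gain G(E) = E, cost C ~ 0

ws = np.linspace(0.0, 1.0, 21)
gains = np.array([net_gain(w) for w in ws])
```

    With these toy numbers the net gain rises monotonically in w, consistent with the paper's conclusion that mixing in the partially entangled pairs is always advantageous.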

  5. Sum Rate Maximization of D2D Communications in Cognitive Radio Network Using Cheating Strategy

    Directory of Open Access Journals (Sweden)

    Yanjing Sun

    2018-01-01

    This paper focuses on the cheating algorithm for device-to-device (D2D) pairs that reuse the uplink channels of cellular users. We are concerned with how D2D pairs are matched with cellular users (CUs) to maximize their sum rate. In contrast with Munkres' algorithm, which gives the optimal matching in terms of maximum throughput, the Gale-Shapley algorithm ensures the stability of the system at the same time and achieves a men-optimal stable matching. In our system, D2D pairs play the role of "men," so that each D2D pair can be matched to the CU that ranks as high as possible in the D2D pair's preference list. Previous studies have found that, by unilaterally falsifying preference lists in a particular way, some men can get better partners while no men get worse off. We utilize this theory to derive the best cheating strategy for D2D pairs. We find that to acquire such a cheating strategy we need to seek as many, and as large, cabals as possible. To this end, we develop a cabal-finding algorithm named RHSTLC, and we prove that it reaches Pareto optimality. In comparison with algorithms proposed in related works, the results show that our algorithm can considerably improve the sum rate of D2D pairs.
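    The deferred-acceptance matching the paper builds on can be sketched as follows: a generic proposer-optimal Gale-Shapley implementation with hypothetical D2D/CU names. It illustrates only the baseline stable matching, not the RHSTLC cabal-finding step.

```python
def gale_shapley(proposer_prefs, receiver_prefs):
    # Proposer-optimal stable matching (here, D2D pairs propose to CUs)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    nxt = {p: 0 for p in proposer_prefs}   # index of each pair's next proposal
    free = list(proposer_prefs)
    engaged = {}                           # receiver -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][nxt[p]]
        nxt[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:
            free.append(engaged[r])        # bump the less-preferred incumbent
            engaged[r] = p
        else:
            free.append(p)                 # rejected: propose to next choice
    return {p: r for r, p in engaged.items()}

# Hypothetical preference lists for three D2D pairs and three CUs
d2d = {"d1": ["c1", "c2", "c3"],
       "d2": ["c1", "c3", "c2"],
       "d3": ["c2", "c1", "c3"]}
cus = {"c1": ["d2", "d1", "d3"],
       "c2": ["d1", "d3", "d2"],
       "c3": ["d3", "d1", "d2"]}
match = gale_shapley(d2d, cus)
```

    The resulting matching is stable: no D2D pair and CU both prefer each other over their assigned partners, which is the property the cheating strategy then exploits by falsifying the receiver-side lists.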

  6. Maximizing System Lifetime by Battery Scheduling

    NARCIS (Netherlands)

    Jongerden, M.R.; Haverkort, Boudewijn R.H.M.; Bohnenkamp, H.C.; Katoen, Joost P.

    2009-01-01

    The use of mobile devices is limited by the battery lifetime. Some devices have the option to connect an extra battery, or to use smart battery-packs with multiple cells to extend the lifetime. In these cases, scheduling the batteries over the load to exploit recovery properties usually extends the

  7. Towards a high throughput droplet-based agglutination assay

    KAUST Repository

    Kodzius, Rimantas; Castro, David; Foulds, Ian G.

    2013-01-01

    This work demonstrates the detection method for a high-throughput droplet-based agglutination assay system. Using simple hydrodynamic forces to mix and aggregate functionalized microbeads, we avoid the need for magnetic assistance or mixing structures. The concentration of our target molecules was estimated by agglutination strength, obtained through optical image analysis. Agglutination in droplets was performed at flow rates of 150 µl/min and occurred in under a minute, with the potential to perform high-throughput measurements. The lowest target concentration detected in droplet microfluidics was 0.17 nM, which is three orders of magnitude more sensitive than a conventional card-based agglutination assay.

  8. Towards a high throughput droplet-based agglutination assay

    KAUST Repository

    Kodzius, Rimantas

    2013-10-22

    This work demonstrates the detection method for a high-throughput droplet-based agglutination assay system. Using simple hydrodynamic forces to mix and aggregate functionalized microbeads, we avoid the need for magnetic assistance or mixing structures. The concentration of our target molecules was estimated by agglutination strength, obtained through optical image analysis. Agglutination in droplets was performed at flow rates of 150 µl/min and occurred in under a minute, with the potential to perform high-throughput measurements. The lowest target concentration detected in droplet microfluidics was 0.17 nM, which is three orders of magnitude more sensitive than a conventional card-based agglutination assay.

  9. Low-cost guaranteed-throughput communication ring for real-time streaming MPSoCs

    NARCIS (Netherlands)

    Dekens, B.H.J.; Kurtin, Philip Sebastian; Bekooij, Marco Jan Gerrit; Smit, Gerardus Johannes Maria

    2013-01-01

    Connection-oriented guaranteed-throughput mesh-based networks on chip have been proposed as a replacement for buses in real-time embedded multiprocessor systems such as software defined radios. Even with attractive features like throughput and latency guarantees they are not always used because

  10. Improving throughput of single-relay DF channel using linear constellation precoding

    KAUST Repository

    Fareed, Muhammad Mehboob

    2014-08-01

    In this letter, we propose a transmission scheme to improve the overall throughput of a cooperative communication system with single decode-and-forward relay. Symbol error rate and throughput analysis of the new scheme are presented to facilitate the performance comparison with the existing decode-and-forward relaying schemes. Simulation results are further provided to corroborate the analytical results. © 2012 IEEE.

  11. Improving throughput of single-relay DF channel using linear constellation precoding

    KAUST Repository

    Fareed, Muhammad Mehboob; Yang, Hongchuan; Alouini, Mohamed-Slim

    2014-01-01

    In this letter, we propose a transmission scheme to improve the overall throughput of a cooperative communication system with single decode-and-forward relay. Symbol error rate and throughput analysis of the new scheme are presented to facilitate the performance comparison with the existing decode-and-forward relaying schemes. Simulation results are further provided to corroborate the analytical results. © 2012 IEEE.

  12. The high throughput biomedicine unit at the institute for molecular medicine Finland: high throughput screening meets precision medicine.

    Science.gov (United States)

    Pietiainen, Vilja; Saarela, Jani; von Schantz, Carina; Turunen, Laura; Ostling, Paivi; Wennerberg, Krister

    2014-05-01

    The High Throughput Biomedicine (HTB) unit at the Institute for Molecular Medicine Finland FIMM was established in 2010 to serve as a national and international academic screening unit providing access to state-of-the-art instrumentation for chemical and RNAi-based high-throughput screening. The initial focus of the unit was multiwell-plate-based chemical screening and high-content microarray-based siRNA screening. However, over the first four years of operation, the unit has moved to a more flexible service platform where both chemical and siRNA screening are performed at different scales, primarily in multiwell-plate-based assays with a wide range of readout possibilities and with a focus on ultraminiaturization to allow affordable screening for academic users. In addition to high-throughput screening, the equipment of the unit is also used to support miniaturized, multiplexed and high-throughput applications for other types of research, such as genomics, sequencing and biobanking operations. Importantly, with the translational research goals at FIMM, an increasing part of the operations at the HTB unit is being focused on high-throughput systems biology platforms for functional profiling of patient cells in personalized and precision medicine projects.

  13. High throughput protein production screening

    Science.gov (United States)

    Beernink, Peter T [Walnut Creek, CA; Coleman, Matthew A [Oakland, CA; Segelke, Brent W [San Ramon, CA

    2009-09-08

    Methods, compositions, and kits for the cell-free production and analysis of proteins are provided. The invention allows for the production of proteins from prokaryotic sequences or eukaryotic sequences, including human cDNAs, using PCR and IVT methods and detecting the proteins through fluorescence or immunoblot techniques. This invention can be used to identify optimized PCR and IVT conditions, codon usages and mutations. The methods are readily automated and can be used for high-throughput analysis of protein expression levels, interactions, and functional states.

  14. Throughput Performance Evaluation of Multiservice Multirate OCDMA in Flexible Networks

    DEFF Research Database (Denmark)

    Raddo, Thiago R.; Sanches, Anderson L.; Tafur Monroy, Idelfonso

    2016-01-01

    in the system. The bit error rate (BER) and packet correct probability expressions are derived, considering the multiple-access interference as binomially distributed. Packet throughput expressions, on the other hand, are derived considering Poisson, binomial, and Markov chain approaches for the composite......, the binomial approach proved to be more straightforward, computationally more efficient, and just as accurate as the Markov chain approach....

  15. Max-plus algebraic throughput analysis of synchronous dataflow graphs

    NARCIS (Netherlands)

    de Groote, Robert; Kuper, Jan; Broersma, Haitze J.; Smit, Gerardus Johannes Maria

    2012-01-01

    In this paper we present a novel approach to throughput analysis of synchronous dataflow (SDF) graphs. Our approach is based on describing the evolution of actor firing times as a linear time-invariant system in max-plus algebra. Experimental results indicate that our approach is faster than
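    The evolution described above can be sketched directly: a max-plus matrix-vector product propagates actor firing times, and the asymptotic per-iteration growth rate (the maximum cycle mean, whose reciprocal is the graph's throughput) emerges from repeated iteration. The 2-actor matrix below is a hypothetical example, not taken from the paper.

```python
import numpy as np

def maxplus_matvec(A, x):
    # (A (x) x)_i = max_j (A_ij + x_j): next firing times from the previous ones
    return np.max(A + x[None, :], axis=1)

def cycle_mean(A, iters=200):
    # For an irreducible max-plus matrix, firing times grow linearly with a
    # slope equal to the max-plus eigenvalue (the maximum cycle mean)
    x = np.zeros(A.shape[0])
    for _ in range(iters):
        x = maxplus_matvec(A, x)
    return np.max(x) / iters

# Hypothetical 2-actor graph; the critical cycle has mean (3 + 2) / 2 = 2.5
A = np.array([[1.0, 3.0],
              [2.0, 2.0]])
lam = cycle_mean(A)
throughput = 1.0 / lam
```

    Power iteration is only one way to estimate the max-plus eigenvalue; Karp's algorithm computes the maximum cycle mean exactly in polynomial time.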

  16. Maximizing the optical network capacity.

    Science.gov (United States)

    Bayvel, Polina; Maher, Robert; Xu, Tianhua; Liga, Gabriele; Shevchenko, Nikita A; Lavery, Domaniç; Alvarado, Alex; Killey, Robert I

    2016-03-06

    Most of the digital data transmitted are carried by optical fibres, forming the greater part of the national and international communication infrastructure. The information-carrying capacity of these networks has increased vastly over the past decades through the introduction of wavelength division multiplexing, advanced modulation formats, digital signal processing and improved optical fibre and amplifier technology. These developments sparked the communication revolution and the growth of the Internet, and have created an illusion of infinite capacity being available. But as the volume of data continues to increase, is there a limit to the capacity of an optical fibre communication channel? The optical fibre channel is nonlinear, and the intensity-dependent Kerr nonlinearity limit has been suggested as a fundamental limit to optical fibre capacity. Current research is focused on whether this is the case, and on linear and nonlinear techniques, both optical and electronic, to understand, unlock and maximize the capacity of optical communications in the nonlinear regime. This paper describes some of them and discusses future prospects for success in the quest for capacity. © 2016 The Authors.

  17. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer.

    Science.gov (United States)

    Yu, Hongyan; Zhang, Yongqiang; Guo, Songtao; Yang, Yuanyuan; Ji, Luyue

    2017-08-18

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power and outage target rate, for two different receiver modes, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to constraints on minimum Quality of Service (QoS), minimum harvested energy and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize the non-convex optimization problem, taking into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no, or only partial, knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed iterative algorithm achieves optimal solutions within a small number of iterations and realizes various tradeoffs between energy efficiency and spectral efficiency, transmit power and outage target rate, respectively.
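    The nonlinear fractional programming step mentioned in the abstract is typically handled with a Dinkelbach-style iteration, which converts the ratio objective into a sequence of subtractive subproblems. A minimal single-link sketch follows; the rate model log2(1 + g·p) and all parameter values are hypothetical, and the paper's QoS and harvested-energy constraints are omitted.

```python
import math

def dinkelbach_energy_efficiency(g=10.0, p_max=1.0, p_circ=0.5, tol=1e-9):
    # Maximize EE(p) = log2(1 + g*p) / (p_circ + p) over 0 <= p <= p_max by
    # repeatedly solving max_p log2(1 + g*p) - q*(p_circ + p) and updating q
    q = 0.0
    p = p_max
    for _ in range(100):
        if q > 0:
            # Stationary point of the subtractive subproblem, clipped to the box
            p = min(p_max, max(0.0, 1.0 / (q * math.log(2.0)) - 1.0 / g))
        rate = math.log2(1.0 + g * p)
        if abs(rate - q * (p_circ + p)) < tol:
            break
        q = rate / (p_circ + p)
    return p, q

p_opt, ee_opt = dinkelbach_energy_efficiency()
```

    Because the circuit power p_circ appears in the denominator, the energy-efficient operating point backs off from full transmit power, which is the core efficiency-versus-rate tradeoff the abstract studies.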

  18. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer

    Science.gov (United States)

    Yu, Hongyan; Zhang, Yongqiang; Yang, Yuanyuan; Ji, Luyue

    2017-01-01

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power and outage target rate, for two different receiver modes, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to constraints on minimum Quality of Service (QoS), minimum harvested energy and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize the non-convex optimization problem, taking into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for scenarios in which the transmitter has no, or only partial, knowledge of the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed iterative algorithm achieves optimal solutions within a small number of iterations and realizes various tradeoffs between energy efficiency and spectral efficiency, transmit power and outage target rate, respectively. PMID:28820496

  19. Throughput capacity of the Asbestos Conversion Unit

    International Nuclear Information System (INIS)

    Hyman, M.H.

    1996-10-01

    An engineering assessment is presented for factors that could significantly limit the throughput capacity of the Asbestos Conversion Unit. The assessment focuses mainly on volumetric throughput capacity (and related mass rate and feed density), and energy input. Important conclusions that were reached during this assessment are that the throughput is limited by feed densification capability and that the design energy input rating appears to be adequate

  20. Power Control for Passive QAM Multisensor Backscatter Communication Systems

    Directory of Open Access Journals (Sweden)

    Shengbo Hu

    2017-01-01

    To achieve a good quality-of-service level, such as high throughput, power control is of great importance for passive quadrature amplitude modulation (QAM) multisensor backscatter communication systems. First, we establish the RF energy harvesting model and give the energy condition. Then, in order to minimize the interference between subcarriers and increase spectral efficiency, we present the colocated passive QAM backscatter communication signal model and solve the nonlinear power-control optimization problems for passive QAM backscatter communication systems. The solutions include the maximum and minimum access interval, the maximum and minimum duty cycle, and the minimal RF-harvested energy under the energy condition for node operation. Using these solutions, the maximum throughput of passive QAM backscatter communication systems is analyzed and numerical calculations are finally made. The numerical results show that the maximal throughput decreases with the consumed power and with the number of sensors, falling quickly as the number of sensors increases. In particular, for a given sensor power consumption, the throughput decreases with the duty cycle, and the number of sensors has little effect on the throughput.

  1. Value maximizing maintenance policies under general repair

    International Nuclear Information System (INIS)

    Marais, Karen B.

    2013-01-01

    One class of maintenance optimization problems considers the notion of general repair maintenance policies where systems are repaired or replaced on failure. In each case the optimality is based on minimizing the total maintenance cost of the system. These cost-centric optimizations ignore the value dimension of maintenance and can lead to maintenance strategies that do not maximize system value. This paper applies these ideas to the general repair optimization problem using a semi-Markov decision process, discounted cash flow techniques, and dynamic programming to identify the value-optimal actions for any given time and system condition. The impact of several parameters on maintenance strategy, such as operating cost and revenue, system failure characteristics, repair and replacement costs, and the planning time horizon, is explored. This approach provides a quantitative basis on which to base maintenance strategy decisions that contribute to system value. These decisions are different from those suggested by traditional cost-based approaches. The results show (1) how the optimal action for a given time and condition changes as replacement and repair costs change, and identifies the point at which these costs become too high for profitable system operation; (2) that for shorter planning horizons it is better to repair, since there is no time to reap the benefits of increased operating profit and reliability; (3) how the value-optimal maintenance policy is affected by the system's failure characteristics, and hence whether it is worthwhile to invest in higher reliability; and (4) the impact of the repair level on the optimal maintenance policy. -- Highlights: •Provides a quantitative basis for maintenance strategy decisions that contribute to system value. •Shows how the optimal action for a given condition changes as replacement and repair costs change. •Shows how the optimal policy is affected by the system's failure characteristics. •Shows when it is
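    The repair-versus-replace trade-off described above can be sketched as a small finite-horizon dynamic program. This is a toy three-condition model with hypothetical profits, costs and degradation probability, not the paper's semi-Markov formulation; "repair" here is a general repair that restores the system to "worn", while "replace" restores it to "good".

```python
def value_optimal_policy(horizon=30, beta=0.95, p_deg=0.4,
                         c_repair=4.0, c_replace=12.0):
    # Conditions: 0 = good, 1 = worn, 2 = failed; per-period operating profit
    profit = {0: 10.0, 1: 6.0, 2: 0.0}
    # Each action moves the system to a condition (at a cost) before it
    # operates; the system then degrades one step with probability p_deg
    actions = {"run":     lambda s: (s, 0.0),
               "repair":  lambda s: (min(s, 1), c_repair),  # general repair
               "replace": lambda s: (0, c_replace)}
    V = {s: 0.0 for s in range(3)}
    policy = {}
    for _ in range(horizon):                  # backward induction
        newV = {}
        for s in range(3):
            best = None
            for name, move in actions.items():
                s0, cost = move(s)
                ev = p_deg * V[min(s0 + 1, 2)] + (1 - p_deg) * V[s0]
                val = profit[s0] - cost + beta * ev
                if best is None or val > best[1]:
                    best = (name, val)
            policy[s], newV[s] = best
        V = newV
    return policy, V

policy, V = value_optimal_policy()
```

    With a long horizon the value-optimal policy keeps running a good system and intervenes once it fails, and shortening the horizon shifts the balance toward cheaper repairs, mirroring the paper's observation about short planning horizons.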

  2. Automated degenerate PCR primer design for high-throughput sequencing improves efficiency of viral sequencing

    Directory of Open Access Journals (Sweden)

    Li Kelvin

    2012-11-01

    Background: In a high-throughput environment, the use of degenerate PCR primers is an important strategy for PCR-amplifying and sequencing large sets of viral isolates from populations that are potentially heterogeneous and continuously evolving. Degenerate primers allow the PCR amplification of a wider range of viral isolates with only one set of pre-mixed primers, thus increasing amplification success rates and minimizing the need for genome finishing activities. Selecting the large set of degenerate PCR primers necessary to tile across an entire viral genome while maximizing their success is best performed computationally. Results: We have developed a fully automated degenerate PCR primer design system that plays a key role in the J. Craig Venter Institute's (JCVI) high-throughput viral sequencing pipeline. A consensus viral genome, or a set of consensus segment sequences in the case of a segmented virus, is specified using IUPAC ambiguity codes in the consensus template sequence to represent the allelic diversity of the target population. PCR primer pairs are then selected computationally to produce a minimal amplicon set capable of tiling across the full length of the specified target region. As part of the tiling process, primer pairs are computationally screened to meet the criteria for successful PCR with one of two described amplification protocols. Actual sequencing success rates are reported for designed primers for measles virus, mumps virus, human parainfluenza virus 1 and 3, human respiratory syncytial virus A and B, and human metapneumovirus, where >90% of designed primer pairs were able to consistently amplify >75% of the isolates. Conclusions: Augmenting our previously developed and published JCVI Primer Design Pipeline, we achieved similarly high sequencing success rates with only minor software modifications. The recommended methodology for the construction of the consensus
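    The IUPAC ambiguity codes mentioned above determine how many plain-base primers a degenerate primer encodes. A minimal sketch of that bookkeeping follows; the code table is the standard IUPAC nucleotide set, while the helper names are hypothetical and not part of the JCVI pipeline.

```python
from itertools import product

# Standard IUPAC nucleotide ambiguity codes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

def degeneracy(primer):
    # Number of distinct plain-base primers a degenerate primer encodes
    n = 1
    for code in primer:
        n *= len(IUPAC[code])
    return n

def expand(primer):
    # Enumerate every plain-base primer in the degenerate pool
    return ["".join(bases) for bases in product(*(IUPAC[c] for c in primer))]
```

    Primer design tools bound the total degeneracy of a candidate primer because the effective concentration of each encoded variant drops as the pool grows.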

  3. Does mental exertion alter maximal muscle activation?

    Directory of Open Access Journals (Sweden)

    Vianney Rozand

    2014-09-01

    Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed, in randomized order, three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), and (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.

  4. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, outperform non-AUC-maximizing metalearning methods with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
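    The rank-based AUC that such metalearners maximize can be computed directly, and a toy "metalearner" over two base learners is then a one-dimensional search over convex weights. A minimal sketch (a grid search, not the nonlinear optimization the abstract describes, and all data below are hypothetical):

```python
import numpy as np

def auc(scores, labels):
    # AUC as the Mann-Whitney statistic: P(positive score > negative score),
    # with ties counted as half a win
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

def best_blend(p1, p2, labels, grid=101):
    # Toy AUC-maximizing "metalearner": grid search over convex combinations
    # of two base-learner prediction vectors
    ws = np.linspace(0.0, 1.0, grid)
    aucs = [auc(w * p1 + (1.0 - w) * p2, labels) for w in ws]
    best = int(np.argmax(aucs))
    return ws[best], aucs[best]
```

    Because the endpoints of the grid recover each base learner, the blended AUC can never fall below the better of the two, which is the basic guarantee an AUC-maximizing metalearner provides over its base algorithms.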

  5. Full integrated system of real-time monitoring based on distributed architecture for the high temperature engineering test reactor (HTTR)

    International Nuclear Information System (INIS)

    Subekti, Muhammad; Ohno, Tomio; Kudo, Kazuhiko; Takamatsu, Kuniyoshi; Nabeshima, Kunihiko

    2005-01-01

    A new monitoring system scheme based on distributed architecture for the High Temperature Engineering Test Reactor (HTTR) is proposed to assure consistency of the real-time process of expanded system. A distributed monitoring task on client PCs as an alternative architecture maximizes the throughput and capabilities of the system even if the monitoring tasks suffer a shortage of bandwidth. The prototype of the on-line monitoring system has been developed successfully and will be tested at the actual HTTR site. (author)

  6. On maximal massive 3D supergravity

    OpenAIRE

    Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K

    2010-01-01

    We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  7. Inclusive Fitness Maximization: An Axiomatic Approach

    OpenAIRE

    Okasha, Samir; Weymark, John; Bossert, Walter

    2014-01-01

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of qu...

  8. Activity versus outcome maximization in time management.

    Science.gov (United States)

    Malkoc, Selin A; Tonietto, Gabriela N

    2018-04-30

    Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.

  9. On the maximal superalgebras of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan

    2009-01-01

    In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS_4 × S^7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS_4 × S^7, and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.

  10. Optimization and high-throughput screening of antimicrobial peptides.

    Science.gov (United States)

    Blondelle, Sylvie E; Lohner, Karl

    2010-01-01

    While a well-established process for lead compound discovery in for-profit companies, high-throughput screening is becoming more popular in basic and applied research settings in academia. The development of combinatorial libraries, combined with easy and less expensive access to new technologies, has greatly contributed to the implementation of high-throughput screening in academic laboratories. While such techniques were earlier applied to simple assays involving single targets or based on binding affinity, they have now been extended to more complex systems such as whole-cell-based assays. In particular, the urgent need for new antimicrobial compounds to overcome the rapid rise of drug-resistant microorganisms, where multiple-target or cell-based assays are often required, has forced scientists to focus on high-throughput technologies. Based on their existence in natural host defense systems and their different mode of action relative to commercial antibiotics, antimicrobial peptides represent new hope for discovering novel antibiotics against multi-resistant bacteria. The ease of generating peptide libraries in different formats has allowed rapid adaptation of high-throughput assays to the search for novel antimicrobial peptides. Similarly, the availability nowadays of high-quantity and high-quality antimicrobial peptide data has permitted the development of predictive algorithms to facilitate the optimization process. This review summarizes the various library formats that lead to de novo antimicrobial peptide sequences, as well as the latest structural knowledge and optimization processes aimed at improving the peptides' selectivity.

  11. A high-throughput urinalysis of abused drugs based on a SPE-LC-MS/MS method coupled with an in-house developed post-analysis data treatment system.

    Science.gov (United States)

    Cheng, Wing-Chi; Yau, Tsan-Sang; Wong, Ming-Kei; Chan, Lai-Ping; Mok, Vincent King-Kuen

    2006-10-16

    A rapid urinalysis system based on SPE-LC-MS/MS with an in-house post-analysis data management system has been developed for the simultaneous identification and semi-quantitation of opiates (morphine, codeine), methadone, amphetamines (amphetamine, methylamphetamine (MA), 3,4-methylenedioxyamphetamine (MDA) and 3,4-methylenedioxymethamphetamine (MDMA)), 11 benzodiazepines or their metabolites, and ketamine. The urine samples are subjected to automated solid-phase extraction prior to analysis by LC-MS (Finnigan Surveyor LC connected to a Finnigan LCQ Advantage) fitted with an Alltech Rocket Platinum EPS C-18 column. With a single-point calibration at the cut-off concentration for each analyte, simultaneous identification and semi-quantitation of the above-mentioned drugs can be achieved in a 10 min run per urine sample. A computer macro-program package was developed to automatically retrieve appropriate data from the analytical data files, compare results with preset values (such as cut-off concentrations and MS matching scores) for each drug being analyzed, and generate user-defined Excel reports that indicate all positive and negative results in a batch-wise manner for ease of checking. The final analytical results are automatically copied into an Access database for report generation purposes. Through the use of automation in sample preparation, simultaneous identification and semi-quantitation by LC-MS/MS, and a tailor-made post-analysis data management system, this new urinalysis system significantly improves the quality of results, reduces post-analysis data treatment time and data-transfer errors, and is suitable for high-throughput laboratories operating batch-wise.
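    The batch checking logic described above (compare each semi-quantitative result against its cut-off concentration and MS matching score, then flag the specimen positive or negative) can be sketched as follows. This is a hypothetical illustration: the cut-off values, the `Result` class and the score threshold are invented, not taken from the published system.

    ```python
    # Hypothetical sketch of the post-analysis cutoff comparison; all names
    # and numeric values here are illustrative, not from the published method.
    from dataclasses import dataclass

    CUTOFFS_NG_ML = {"morphine": 300.0, "codeine": 300.0, "ketamine": 100.0}  # assumed cut-offs

    @dataclass
    class Result:
        analyte: str
        concentration: float   # ng/mL, from single-point calibration
        ms_match_score: float  # spectral library match, 0-100

    def classify(res: Result, min_score: float = 70.0) -> str:
        """Flag positive only if the result exceeds the cut-off AND the MS spectrum matches."""
        cutoff = CUTOFFS_NG_ML.get(res.analyte)
        if cutoff is None:
            return "unknown analyte"
        if res.concentration >= cutoff and res.ms_match_score >= min_score:
            return "positive"
        return "negative"

    batch = [Result("morphine", 450.0, 88.0), Result("ketamine", 40.0, 92.0)]
    report = {r.analyte: classify(r) for r in batch}
    print(report)
    ```

    A real pipeline would additionally write the per-batch report out (the record mentions Excel reports and an Access database), but the pass/fail decision per analyte reduces to a two-condition check like the one above.
    
    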

  12. Transcriptome profiling and digital gene expression by deep sequencing in early somatic embryogenesis of endangered medicinal Eleutherococcus senticosus Maxim.

    Science.gov (United States)

    Tao, Lei; Zhao, Yue; Wu, Ying; Wang, Qiuyu; Yuan, Hongmei; Zhao, Lijuan; Guo, Wendong; You, Xiangling

    2016-03-01

    Somatic embryogenesis (SE) has been studied as a model system to understand molecular events in physiology, biochemistry, and cytology during plant embryo development. In particular, it is exceedingly difficult to access the morphological and early regulatory events in zygotic embryos. To understand the molecular mechanisms regulating early SE in Eleutherococcus senticosus Maxim., we used high-throughput RNA-Seq technology to investigate its transcriptome. We obtained 58,327,688 reads, which were assembled into 75,803 unique unigenes. To better understand their functions, the unigenes were annotated using the Clusters of Orthologous Groups, Gene Ontology, and Kyoto Encyclopedia of Genes and Genomes databases. Digital gene expression libraries revealed differences in gene expression profiles at different developmental stages (embryogenic callus, yellow embryogenic callus, globular embryo). We obtained a sequencing depth of >5.6 million tags per sample and identified many differentially expressed genes at various stages of SE. The initiation of SE affected gene expression in many KEGG pathways, but predominantly that in metabolic pathways, biosynthesis of secondary metabolites, and plant hormone signal transduction. This information on the changes in the multiple pathways related to SE induction in E. senticosus Maxim. embryogenic tissue will contribute to a more comprehensive understanding of the mechanisms involved in early SE. Additionally, the differentially expressed genes may act as molecular markers and could play very important roles in the early stage of SE. The results are a comprehensive molecular biology resource for investigating SE of E. senticosus Maxim. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. High-Throughput Analysis of Enzyme Activities

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Guoxin [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    High-throughput screening (HTS) techniques have been applied to many research fields nowadays. Robotic microarray printing and automated microtiter-plate handling allow HTS to be performed in both heterogeneous and homogeneous formats, with minimal sample required for each assay element. In this dissertation, new HTS techniques for enzyme activity analysis were developed. First, patterns of immobilized enzyme on a nylon screen were detected by a multiplexed capillary system. The imaging resolution is limited by the outer diameter of the capillaries; to obtain finer images, capillaries with smaller outer diameters can be used to form the imaging probe. Application of capillary electrophoresis allows separation of the product from the substrate in the reaction mixture, so the product need not have optical properties different from those of the substrate. UV absorption detection allows almost universal detection of organic molecules. Thus, no modification of either the substrate or the product molecules is necessary. This technique has the potential to be used in screening for local variations in the distribution of specific biomolecules in a tissue, or in screening of multiple immobilized catalysts. Another high-throughput screening technique was developed by directly monitoring the light intensity of the immobilized-catalyst surface using a scientific charge-coupled device (CCD). Briefly, the surface of an enzyme microarray is focused onto a scientific CCD using an objective lens. By carefully choosing the detection wavelength, generation of product on an enzyme spot can be seen by the CCD. Analyzing the light intensity change over time on an enzyme spot gives information on the reaction rate. The same microarray can be used many times; thus, high-throughput kinetic studies of hundreds of catalytic reactions are made possible. Finally, we studied the fluorescence emission spectra of ADP and obtained the detection limits for ADP under three different

  14. A high-throughput robotic sample preparation system and HPLC-MS/MS for measuring urinary anatabine, anabasine, nicotine and major nicotine metabolites.

    Science.gov (United States)

    Wei, Binnian; Feng, June; Rehmani, Imran J; Miller, Sharyn; McGuffey, James E; Blount, Benjamin C; Wang, Lanqing

    2014-09-25

    Most sample preparation methods characteristically involve intensive and repetitive labor, which is inefficient when preparing large numbers of samples from population-scale studies. This study presents a robotic system designed to meet the sampling requirements of large population-scale studies. Using this robotic system, we developed and validated a method to simultaneously measure urinary anatabine, anabasine, nicotine and seven major nicotine metabolites: 4-hydroxy-4-(3-pyridyl)butanoic acid, cotinine-N-oxide, nicotine-N-oxide, trans-3'-hydroxycotinine, norcotinine, cotinine and nornicotine. We analyzed robotically prepared samples using high-performance liquid chromatography (HPLC) coupled with triple quadrupole mass spectrometry in positive electrospray ionization mode using scheduled multiple reaction monitoring (sMRM) with a total runtime of 8.5 min. The optimized procedure was able to deliver linear analyte responses over a broad range of concentrations. Responses of urine-based calibrators delivered coefficients of determination (R²) of >0.995. Sample preparation recovery was generally higher than 80%. The robotic system was able to prepare four 96-well plates (384 urine samples) per day, and the overall method afforded an accuracy range of 92-115% with acceptable imprecision. The automated workflow substantially reduces sample preparation labor, making it efficient and practical for routine measurements in large population-scale studies such as the National Health and Nutrition Examination Survey (NHANES) and the Population Assessment of Tobacco and Health (PATH) study. Published by Elsevier B.V.

  15. Quantization with maximally degenerate Poisson brackets: the harmonic oscillator!

    International Nuclear Information System (INIS)

    Nutku, Yavuz

    2003-01-01

    Nambu's construction of multi-linear brackets for super-integrable systems can be thought of as degenerate Poisson brackets with a maximal set of Casimirs in their kernel. By introducing privileged coordinates in phase space these degenerate Poisson brackets are brought to the form of Heisenberg's equations. We propose a definition for constructing quantum operators for classical functions, which enables us to turn the maximally degenerate Poisson brackets into operators. They pose a set of eigenvalue problems for a new state vector. The requirement of the single-valuedness of this eigenfunction leads to quantization. The example of the harmonic oscillator is used to illustrate this general procedure for quantizing a class of maximally super-integrable systems

  16. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  17. A high throughput amenable Arabidopsis-P. aeruginosa system reveals a rewired regulatory module and the utility to identify potent anti-infectives.

    Directory of Open Access Journals (Sweden)

    Suresh Gopalan

    2011-01-01

    Full Text Available We previously demonstrated that in a metasystem consisting of Arabidopsis seedlings growing in liquid medium (in 96 well plates), even microbes considered to be innocuous, such as laboratory strains of E. coli and B. subtilis, can cause potent damage to the host. We further posited that such environment-induced adaptations are brought about by 'system status changes' (rewiring of pre-existing cellular signaling networks and components) of the host and the microbe, and that prolongation of such a situation could lead to the emergence of pathogenic states in real life. Here, using this infection model, we show that the master regulator GacA of the human opportunistic pathogen P. aeruginosa (strain PA14) is dispensable for pathogenesis, as evidenced by three independent read-outs. The gene expression profiles of the host after infection with wild-type PA14 or the gacA mutant are also identical. GacA normally acts upstream of the quorum sensing regulatory circuit (which includes the regulator LasR) that controls a subset of virulence factors. Double mutants in gacA and lasR behave similarly to the lasR mutant, as seen by abrogation of a characteristic cell-type-specific host cell damage caused by PA14 or the gacA mutant. This indicates that a previously unrecognized regulatory mechanism is operative under these conditions upstream of LasR. In addition, the detrimental effect of PA14 on Arabidopsis seedlings is resistant to high concentrations of the aminoglycoside antibiotic gentamicin. These data suggest that the Arabidopsis seedling infection system could be used to identify anti-infectives with potentially novel modes of action.

  18. A high throughput amenable Arabidopsis-P. aeruginosa system reveals a rewired regulatory module and the utility to identify potent anti-infectives.

    Science.gov (United States)

    Gopalan, Suresh; Ausubel, Frederick M

    2011-01-21

    We previously demonstrated that in a metasystem consisting of Arabidopsis seedlings growing in liquid medium (in 96 well plates), even microbes considered to be innocuous, such as laboratory strains of E. coli and B. subtilis, can cause potent damage to the host. We further posited that such environment-induced adaptations are brought about by 'system status changes' (rewiring of pre-existing cellular signaling networks and components) of the host and the microbe, and that prolongation of such a situation could lead to the emergence of pathogenic states in real life. Here, using this infection model, we show that the master regulator GacA of the human opportunistic pathogen P. aeruginosa (strain PA14) is dispensable for pathogenesis, as evidenced by three independent read-outs. The gene expression profiles of the host after infection with wild-type PA14 or the gacA mutant are also identical. GacA normally acts upstream of the quorum sensing regulatory circuit (which includes the regulator LasR) that controls a subset of virulence factors. Double mutants in gacA and lasR behave similarly to the lasR mutant, as seen by abrogation of a characteristic cell-type-specific host cell damage caused by PA14 or the gacA mutant. This indicates that a previously unrecognized regulatory mechanism is operative under these conditions upstream of LasR. In addition, the detrimental effect of PA14 on Arabidopsis seedlings is resistant to high concentrations of the aminoglycoside antibiotic gentamicin. These data suggest that the Arabidopsis seedling infection system could be used to identify anti-infectives with potentially novel modes of action.

  19. Utility maximization and mode of payment

    NARCIS (Netherlands)

    Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.

    2000-01-01

    The implications of stochastic utility maximization in a model of the choice of payment mode are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.

  20. Corporate Social Responsibility and Profit Maximizing Behaviour

    OpenAIRE

    Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta

    2005-01-01

    We examine the behavior of a profit maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer driven) corporate social responsibility is an optimal choice compatible with profit maximizing behavior.

  1. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Maximal Entanglement in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli

    2017-11-01

    Full Text Available We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.

  3. A method for high throughput bioelectrochemical research based on small scale microbial electrolysis cells

    KAUST Repository

    Call, Douglas F.; Logan, Bruce E.

    2011-01-01

    There is great interest in studying exoelectrogenic microorganisms, but existing methods can require expensive electrochemical equipment and specialized reactors. We developed a simple system for conducting high throughput bioelectrochemical

  4. EMBRYONIC VASCULAR DISRUPTION ADVERSE OUTCOMES: LINKING HIGH THROUGHPUT SIGNALING SIGNATURES WITH FUNCTIONAL CONSEQUENCES

    Science.gov (United States)

    Embryonic vascular disruption is an important adverse outcome pathway (AOP) given the knowledge that chemical disruption of early cardiovascular system development leads to broad prenatal defects. High throughput screening (HTS) assays provide potential building blocks for AOP d...

  5. Local Hamiltonians for maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, P.; Florio, G.; Pascazio, S.; Pepe, F.

    2010-10-01

    We study the conditions for obtaining maximally multipartite-entangled states (MMESs) as nondegenerate eigenstates of Hamiltonians that involve only short-range interactions. We investigate small-size systems (with a number of qubits ranging from 3 to 5) and show some example Hamiltonians with MMESs as eigenstates.

  6. Local Hamiltonians for maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, P.; Florio, G.; Pascazio, S.; Pepe, F.

    2010-01-01

    We study the conditions for obtaining maximally multipartite-entangled states (MMESs) as nondegenerate eigenstates of Hamiltonians that involve only short-range interactions. We investigate small-size systems (with a number of qubits ranging from 3 to 5) and show some example Hamiltonians with MMESs as eigenstates.

  7. Principle of Entropy Maximization for Nonequilibrium Steady States

    DEFF Research Database (Denmark)

    Shapiro, Alexander; Stenby, Erling Halfdan

    2002-01-01

    The goal of this contribution is to find out to what extent the principle of entropy maximization, which serves as a basis for the equilibrium thermodynamics, may be generalized onto non-equilibrium steady states. We prove a theorem that, in the system of thermodynamic coordinates, where entropy...

  8. Transformation of bipartite non-maximally entangled states into a ...

    Indian Academy of Sciences (India)

    We present two schemes for transforming bipartite non-maximally entangled states into a W state in cavity QED system, by using highly detuned interactions and the resonant interactions between two-level atoms and a single-mode cavity field. A tri-atom W state can be generated by adjusting the interaction times between ...

  9. Transformation of bipartite non-maximally entangled states into a ...

    Indian Academy of Sciences (India)

    We present two schemes for transforming bipartite non-maximally entangled states into a W state in cavity QED system, by using highly detuned interactions and the resonant interactions between ... Proceedings of the International Workshop/Conference on Computational Condensed Matter Physics and Materials Science

  10. Adaptive Modulation for a Downlink Multicast Channel in OFDMA Systems

    DEFF Research Database (Denmark)

    Wang, Haibo; Schwefel, Hans-Peter; Toftegaard, Thomas Skjødeberg

    2007-01-01

    In this paper we focus on adaptive modulation strategies for multicast services in orthogonal frequency division multiple access systems. A reward function has been defined as the optimization target, which includes both the average user throughput and the bit error rate. We also developed an adaptive modulation strategy, namely the local best reward strategy, to maximize this reward function. The performance of different modulation strategies is compared in different SNR distribution scenarios, and the optimum strategy for each scenario is suggested.
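    The reward-driven modulation choice described above can be sketched as a toy example: pick, per SNR condition, the modulation order whose reward (bits per symbol penalized by error rate) is highest. The reward form and the BER approximation below are illustrative assumptions, not the paper's exact definitions.

    ```python
    # Toy sketch of reward-based modulation selection; the reward weighting
    # and the AWGN M-QAM BER approximation are assumptions for illustration.
    import math

    MODES = {"QPSK": 2, "16QAM": 4, "64QAM": 6}  # bits per symbol

    def ber_approx(bits, snr_linear):
        """Crude AWGN BER approximation for square M-QAM (illustrative only)."""
        m = 2 ** bits
        return 0.2 * math.exp(-1.5 * snr_linear / (m - 1))

    def best_mode(snr_db, ber_weight=50.0):
        """Return the mode maximizing reward = throughput - penalty * BER."""
        snr = 10 ** (snr_db / 10)
        def reward(mode):
            bits = MODES[mode]
            return bits - ber_weight * ber_approx(bits, snr)
        return max(MODES, key=reward)

    print(best_mode(5.0), best_mode(25.0))
    ```

    At low SNR the BER penalty dominates and a robust low-order constellation wins; at high SNR the throughput term dominates and the densest constellation is chosen, which is the qualitative trade-off the reward function in the record captures.
    
    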

  11. The metropolitan VoD system based on ethernet/SCM PON

    Science.gov (United States)

    Ji, Wei; Yang, Hongliang; Feng, Dejun; Liu, Yang; Sun, Jande

    2008-11-01

    VoD is a very attractive service used for entertainment, education and other purposes. In this paper, we present an evolutionary approach that integrates EPON and SCM-PON via WDM technology to provide high dedicated bandwidth for metropolitan VoD services. Using DVB and IPTV protocols with both unicast and broadcast delivery to maximize system throughput, numerical analysis shows that the hybrid PON system can support metropolitan VoD services.

  12. A Data Analysis Pipeline Accounting for Artifacts in Tox21 Quantitative High-Throughput Screening Assays.

    Science.gov (United States)

    Hsieh, Jui-Hua; Sedykh, Alexander; Huang, Ruili; Xia, Menghang; Tice, Raymond R

    2015-08-01

    A main goal of the U.S. Tox21 program is to profile a 10K-compound library for activity against a panel of stress-related and nuclear receptor signaling pathway assays using a quantitative high-throughput screening (qHTS) approach. However, assay artifacts, including nonreproducible signals and assay interference (e.g., autofluorescence), complicate compound activity interpretation. To address these issues, we have developed a data analysis pipeline that includes an updated signal noise-filtering/curation protocol and an assay interference flagging system. To better characterize various types of signals, we adopted a weighted version of the area under the curve (wAUC) to quantify the amount of activity across the tested concentration range in combination with the assay-dependent point-of-departure (POD) concentration. Based on the 32 Tox21 qHTS assays analyzed, we demonstrate that signal profiling using wAUC affords the best reproducibility (Pearson's r = 0.91) in comparison with the POD (0.82) only or the AC(50) (i.e., half-maximal activity concentration, 0.81). Among the activity artifacts characterized, cytotoxicity is the major confounding factor; on average, about 8% of Tox21 compounds are affected, whereas autofluorescence affects less than 0.5%. To facilitate data evaluation, we implemented two graphical user interface applications, allowing users to rapidly evaluate the in vitro activity of Tox21 compounds. © 2015 Society for Laboratory Automation and Screening.
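    The wAUC idea above (summarizing the amount of activity across the tested concentration range, anchored at the point-of-departure) can be illustrated with a minimal sketch. The trapezoidal integration and the POD masking here are assumptions for illustration, not the exact Tox21 pipeline definition, and the data points are invented.

    ```python
    # Illustrative weighted-AUC summary of a concentration-response series;
    # the weighting scheme is an assumption, not the published Tox21 one.
    import numpy as np

    def weighted_auc(log_conc, response, pod):
        """Trapezoidal AUC of the response curve, counting only concentrations
        at or above the point-of-departure (POD)."""
        log_conc = np.asarray(log_conc, dtype=float)
        response = np.asarray(response, dtype=float)
        mask = log_conc >= pod
        if mask.sum() < 2:
            return 0.0  # not enough points above the POD to integrate
        x, y = log_conc[mask], response[mask]
        return float(np.sum((y[1:] + y[:-1]) / 2 * np.diff(x)))

    conc = [-9, -8, -7, -6, -5]   # log10 molar test concentrations (invented)
    resp = [0, 2, 20, 60, 95]     # % activity (invented)
    print(weighted_auc(conc, resp, pod=-7))
    ```

    Unlike a single AC50, such an area summary reflects both where the curve departs from baseline and how much activity accumulates over the tested range, which is consistent with the reproducibility comparison reported in the record.
    
    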

  13. On the throughput of a relay-assisted cognitive radio MIMO channel with space alignment

    KAUST Repository

    Sboui, Lokman; Ghazzai, Hakim; Rezki, Zouheir; Alouini, Mohamed-Slim

    2014-01-01

    We study the achievable rate of a multiple antenna relay-assisted cognitive radio system where a secondary user (SU) aims to communicate instantaneously with the primary user (PU). A special linear precoding scheme is proposed to enable the SU to take advantage of the primary eigenmodes. The used eigenmodes are subject to an interference constraint fixed beforehand by the primary transmitter. Due to the absence of a direct link, both users exploit an amplify-and-forward relay to accomplish their transmissions to a common receiver. After decoding the PU signal, the receiver employs a successive interference cancellation (SIC) to estimate the secondary message. We derive the optimal power allocation that maximizes the achievable rate of the SU respecting interference, peak and relay power constraints. Furthermore, we analyze the SIC detection accuracy on the PU throughput. Numerical results highlight the cognitive rate gain achieved by our proposed scheme without harming the primary rate. In addition, we show that the relay has an important role in increasing or decreasing PU and SU rates especially when varying its power and/or its amplifying gain. © 2014 IFIP.

  14. On the throughput of a relay-assisted cognitive radio MIMO channel with space alignment

    KAUST Repository

    Sboui, Lokman

    2014-05-01

    We study the achievable rate of a multiple antenna relay-assisted cognitive radio system where a secondary user (SU) aims to communicate instantaneously with the primary user (PU). A special linear precoding scheme is proposed to enable the SU to take advantage of the primary eigenmodes. The used eigenmodes are subject to an interference constraint fixed beforehand by the primary transmitter. Due to the absence of a direct link, both users exploit an amplify-and-forward relay to accomplish their transmissions to a common receiver. After decoding the PU signal, the receiver employs a successive interference cancellation (SIC) to estimate the secondary message. We derive the optimal power allocation that maximizes the achievable rate of the SU respecting interference, peak and relay power constraints. Furthermore, we analyze the SIC detection accuracy on the PU throughput. Numerical results highlight the cognitive rate gain achieved by our proposed scheme without harming the primary rate. In addition, we show that the relay has an important role in increasing or decreasing PU and SU rates especially when varying its power and/or its amplifying gain. © 2014 IFIP.
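    Under the constraints named above (a peak power limit and an interference threshold fixed by the primary transmitter), a single-eigenmode toy version of the power allocation reduces to taking the tightest cap, since the rate is increasing in transmit power. This is an illustrative simplification of the paper's multi-antenna optimization; all symbols and numeric values below are invented.

    ```python
    # Toy single-mode power allocation under peak and interference constraints:
    # maximize log2(1 + g*p) subject to p <= P_peak and h*p <= I_th.
    # Since the rate is monotone in p, the optimum is the tightest cap.
    import math

    def optimal_power(p_peak, i_th, h_interference):
        """Largest feasible power: min of the peak cap and the interference cap."""
        return min(p_peak, i_th / h_interference)

    g = 0.8                                    # secondary channel gain (assumed)
    p = optimal_power(p_peak=10.0, i_th=2.0, h_interference=0.5)
    rate = math.log2(1 + g * p)                # achievable rate in bits/s/Hz
    print(p, rate)
    ```

    In the multi-eigenmode setting of the paper the allocation additionally splits power across eigenmodes (with the relay power budget as a further constraint), but each mode's power is still bounded by the same two kinds of caps sketched here.
    
    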

  15. Novel approach to high-throughput determination of endocrine disruptors using recycled diatomaceous earth as a green sorbent phase for thin-film solid-phase microextraction combined with 96-well plate system.

    Science.gov (United States)

    Kirschner, Nicolas; Dias, Adriana Neves; Budziak, Dilma; da Silveira, Cristian Berto; Merib, Josias; Carasek, Eduardo

    2017-12-15

    A sustainable approach to TF-SPME is presented using recycled diatomaceous earth, obtained from a beer purification process, as a green sorbent phase for the determination of bisphenol A (BPA), benzophenone (BzP), triclocarban (TCC), 4-methylbenzylidene camphor (4-MBC) and 2-ethylhexyl-p-methoxycinnamate (EHMC) in environmental water samples. TF-SPME was combined with a 96-well plate system allowing for high-throughput analysis due to the simultaneous extraction/desorption of up to 96 samples. The proposed sorbent phase exhibited good stability in organic solvents, as well as satisfactory analytical performance. The optimized method consisted of 240 min of extraction at pH 6 with the addition of NaCl (15% w/v). A mixture of MeOH:ACN (50:50 v/v) was used for the desorption of the analytes, with a desorption time of 30 min. Limits of detection varied from 1 μg L⁻¹ for BzP and TCC to 8 μg L⁻¹ for the other analytes, and R² ranged from 0.9926 for 4-MBC to 0.9988 for BPA. This novel and straightforward approach offers an environmentally-friendly and very promising alternative for routine analysis. The total sample preparation time per sample was approximately 2.8 min, which is a significant advantage when a large number of analytical runs is required. Copyright © 2017 Elsevier B.V. All rights reserved.
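    A calibration fit of the kind summarized by the R² values above can be sketched as a least-squares line through standard points. The concentrations and detector responses below are hypothetical, not the paper's data; the point is only how a coefficient of determination is obtained from a linear calibration.

    ```python
    # Hypothetical linear calibration and its coefficient of determination (R^2).
    # Standard concentrations and responses are invented for illustration.
    import numpy as np

    conc = np.array([0.0, 2.0, 4.0, 8.0, 16.0])    # standard concentrations, ug/L
    resp = np.array([0.1, 4.2, 8.1, 16.3, 31.9])   # detector responses (made up)

    slope, intercept = np.polyfit(conc, resp, 1)   # least-squares line
    pred = slope * conc + intercept
    ss_res = float(np.sum((resp - pred) ** 2))     # residual sum of squares
    ss_tot = float(np.sum((resp - resp.mean()) ** 2))
    r2 = 1 - ss_res / ss_tot
    print(round(r2, 4))
    ```

    An R² close to 1 over the working range, as reported in the record (0.9926 to 0.9988), indicates the response is effectively linear in concentration for these analytes.
    
    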

  16. Maximally-localized position, Euclidean path-integral, and thermodynamics in GUP quantum mechanics

    Science.gov (United States)

    Bernardo, Reginald Christian S.; Esguerra, Jose Perico H.

    2018-04-01

    In dealing with quantum mechanics at very high energies, it is essential to adapt to a quasiposition representation using the maximally-localized states because of the generalized uncertainty principle. In this paper, we look at maximally-localized states as eigenstates of the operator ξ = X + iβP that we refer to as the maximally-localized position. We calculate the overlap between maximally-localized states and show that the identity operator can be expressed in terms of the maximally-localized states. Furthermore, we show that the maximally-localized position is diagonal in momentum space and that the maximally-localized position and its adjoint satisfy commutation and anti-commutation relations reminiscent of the harmonic oscillator commutation and anti-commutation relations. As an application, we use the maximally-localized position in developing the Euclidean path-integral and introduce the compact form of the propagator for maximal localization. The free-particle momentum-space propagator and the propagator for maximal localization are analytically evaluated up to quadratic order in β. Finally, we obtain a path-integral expression for the partition function of a thermodynamic system using the maximally-localized states. The partition function of a gas of noninteracting particles is evaluated. At temperatures exceeding the Planck energy, we obtain the gas' maximum internal energy N/2β and recover the zero heat capacity of an ideal gas.

  17. HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL

    CERN Document Server

    HR Division

    2000-01-01

    Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000.Reimbursement maximalThe revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN.Fixed contributionsThe fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions):voluntarily insured member of the personnel, with complete coverage:815,- (was 803,- in 1999)voluntarily insured member of the personnel, with reduced coverage:407,- (was 402,- in 1999)voluntarily insured no longer dependent child:326,- (was 321...

  18. Maximal Inequalities for Dependent Random Variables

    DEFF Research Database (Denmark)

    Hoffmann-Jorgensen, Jorgen

    2016-01-01

    Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X-1, X-2,... be random variables with partial sums S-k = X-1 + ... + X-k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M-n = max(1) (...
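    As a concrete instance of the kind of bound the record describes, Kolmogorov's maximal inequality for independent, mean-zero random variables with finite variance reads:

    ```latex
    \mathbb{P}\Big(\max_{1 \le k \le n} |S_k| \ge \lambda\Big)
      \;\le\; \frac{\operatorname{Var}(S_n)}{\lambda^{2}},
      \qquad \lambda > 0, \quad S_k = X_1 + \dots + X_k .
    ```

    Bounds of this form control the whole trajectory of partial sums by a quantity depending only on the final sum, which is what makes them useful in proving the limit theorems listed above.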

  19. Maximizing Function through Intelligent Robot Actuator Control

    Data.gov (United States)

    National Aeronautics and Space Administration — Maximizing Function through Intelligent Robot Actuator Control Successful missions to Mars and beyond will only be possible with the support of high-performance...

  20. An ethical justification of profit maximization

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2010-01-01

    In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequentialist ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequentialist point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.

  1. A definition of maximal CP-violation

    International Nuclear Information System (INIS)

    Roos, M.

    1985-01-01

    The unitary matrix of quark flavour mixing is parametrized in a general way, permitting a mathematically natural definition of maximal CP violation. Present data turn out to violate this definition by 2-3 standard deviations. (orig.)

  2. A cosmological problem for maximally symmetric supergravity

    International Nuclear Information System (INIS)

    German, G.; Ross, G.G.

    1986-01-01

    Under very general considerations it is shown that inflationary models of the universe based on maximally symmetric supergravity with flat potentials are unable to resolve the cosmological energy density (Polonyi) problem. (orig.)

  3. Insulin resistance and maximal oxygen uptake

    DEFF Research Database (Denmark)

    Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans

    2003-01-01

BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model, which also included the presence of diabetes and ischemic heart disease. CONCLUSION: …

  4. Maximal supergravities and the E10 model

    International Nuclear Information System (INIS)

    Kleinschmidt, Axel; Nicolai, Hermann

    2006-01-01

The maximal rank hyperbolic Kac-Moody algebra e10 has been conjectured to play a prominent role in the unification of duality symmetries in string and M theory. We review some recent developments supporting this conjecture.

  5. High-throughput screening of carbohydrate-degrading enzymes using novel insoluble chromogenic substrate assay kits

    DEFF Research Database (Denmark)

    Schückel, Julia; Kracun, Stjepan Kresimir; Willats, William George Tycho

    2016-01-01

…for this is that advances in genome and transcriptome sequencing, together with associated bioinformatics tools, allow for rapid identification of candidate CAZymes, but technology for determining an enzyme's biochemical characteristics has advanced more slowly. To address this technology gap, a novel high-throughput assay … CPH and ICB substrates are provided in a 96-well high-throughput assay system. The CPH substrates can be made in four different colors, enabling them to be mixed together and thus increasing assay throughput. The protocol describes a 96-well plate assay and illustrates how this assay can be used for screening the activities of enzymes, enzyme cocktails, and broths.

  6. Maximizing your return on people.

    Science.gov (United States)

    Bassi, Laurie; McMurrer, Daniel

    2007-03-01

    Though most traditional HR performance metrics don't predict organizational performance, alternatives simply have not existed--until now. During the past ten years, researchers Laurie Bassi and Daniel McMurrer have worked to develop a system that allows executives to assess human capital management (HCM) and to use those metrics both to predict organizational performance and to guide organizations' investments in people. The new framework is based on a core set of HCM drivers that fall into five major categories: leadership practices, employee engagement, knowledge accessibility, workforce optimization, and organizational learning capacity. By employing rigorously designed surveys to score a company on the range of HCM practices across the five categories, it's possible to benchmark organizational HCM capabilities, identify HCM strengths and weaknesses, and link improvements or back-sliding in specific HCM practices with improvements or shortcomings in organizational performance. The process requires determining a "maturity" score for each practice, based on a scale of 1 (low) to 5 (high). Over time, evolving maturity scores from multiple surveys can reveal progress in each of the HCM practices and help a company decide where to focus improvement efforts that will have a direct impact on performance. The authors draw from their work with American Standard, South Carolina's Beaufort County School District, and a bevy of financial firms to show how improving HCM scores led to increased sales, safety, academic test scores, and stock returns. Bassi and McMurrer urge HR departments to move beyond the usual metrics and begin using HCM measurement tools to gauge how well people are managed and developed throughout the organization. In this new role, according to the authors, HR can take on strategic responsibility and ensure that superior human capital management becomes central to the organization's culture.

  7. Optimal topologies for maximizing network transmission capacity

    Science.gov (United States)

    Chen, Zhenhao; Wu, Jiajing; Rong, Zhihai; Tse, Chi K.

    2018-04-01

    It has been widely demonstrated that the structure of a network is a major factor that affects its traffic dynamics. In this work, we try to identify the optimal topologies for maximizing the network transmission capacity, as well as to build a clear relationship between structural features of a network and the transmission performance in terms of traffic delivery. We propose an approach for designing optimal network topologies against traffic congestion by link rewiring and apply them on the Barabási-Albert scale-free, static scale-free and Internet Autonomous System-level networks. Furthermore, we analyze the optimized networks using complex network parameters that characterize the structure of networks, and our simulation results suggest that an optimal network for traffic transmission is more likely to have a core-periphery structure. However, assortative mixing and the rich-club phenomenon may have negative impacts on network performance. Based on the observations of the optimized networks, we propose an efficient method to improve the transmission capacity of large-scale networks.
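The record does not spell out the rewiring procedure; as a generic illustration of the idea, the toy hill-climb below rewires links to reduce the network's largest shortest-path betweenness, a standard congestion proxy. The graph, proxy, and acceptance rule are our simplification, not the authors' algorithm.

```python
import random
from collections import deque

def max_betweenness(adj):
    """Largest shortest-path betweenness in an unweighted graph
    (Brandes' algorithm); the top node is the congestion bottleneck."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, queue = [], deque([s])
        while queue:  # BFS from s, counting shortest paths
            v = queue.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1; queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]; preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):  # back-propagate dependencies
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return max(bc.values())

def connected(adj):
    start = next(iter(adj))
    seen, queue = {start}, deque([start])
    while queue:
        for w in adj[queue.popleft()]:
            if w not in seen:
                seen.add(w); queue.append(w)
    return len(seen) == len(adj)

def rewire(adj, trials=200, seed=3):
    """Greedy link rewiring: move one endpoint of a random edge and keep
    the change only if the bottleneck betweenness drops while the graph
    stays connected."""
    random.seed(seed)
    best = max_betweenness(adj)
    nodes = sorted(adj)
    for _ in range(trials):
        u = random.choice(nodes)
        if not adj[u]:
            continue
        v = random.choice(sorted(adj[u]))
        w = random.choice(nodes)
        if w == u or w == v or w in adj[u]:
            continue
        adj[u].discard(v); adj[v].discard(u)
        adj[u].add(w); adj[w].add(u)
        if connected(adj) and max_betweenness(adj) < best:
            best = max_betweenness(adj)
        else:  # revert the move
            adj[u].discard(w); adj[w].discard(u)
            adj[u].add(v); adj[v].add(u)
    return best

# Star graph: node 0 linked to six leaves, an extreme bottleneck topology.
star = {0: set(range(1, 7)), **{i: {0} for i in range(1, 7)}}
before = max_betweenness(star)
after = rewire(star)
print(before, after)
```

Betweenness here counts ordered source-target pairs, so the star's hub scores 6 × 5 = 30; any accepted rewiring can only lower that bottleneck.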

  8. Neutrino mass textures with maximal CP violation

    International Nuclear Information System (INIS)

    Aizawa, Ichiro; Kitabayashi, Teruyuki; Yasue, Masaki

    2005-01-01

    We show three types of neutrino mass textures, which give maximal CP violation as well as maximal atmospheric neutrino mixing. These textures are described by six real mass parameters: one specified by two complex flavor neutrino masses and two constrained ones and the others specified by three complex flavor neutrino masses. In each texture, we calculate mixing angles and masses, which are consistent with observed data, as well as Majorana CP phases

  9. Why firms should not always maximize profits

    OpenAIRE

    Kolstad, Ivar

    2006-01-01

    Though corporate social responsibility (CSR) is on the agenda of most major corporations, corporate executives still largely support the view that corporations should maximize the returns to their owners. There are two lines of defence for this position. One is the Friedmanian view that maximizing owner returns is the corporate social responsibility of corporations. The other is a position voiced by many executives, that CSR and profits go together. This paper argues that the first position i...

  10. Maximally Informative Observables and Categorical Perception

    OpenAIRE

    Tsiang, Elaine

    2012-01-01

We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...

  11. Applications of expectation maximization algorithm for coherent optical communication

    DEFF Research Database (Denmark)

    Carvalho, L.; Oliveira, J.; Zibar, Darko

    2014-01-01

In this invited paper, we present powerful statistical signal processing methods used by the machine learning community and link them to current problems in optical communication. In particular, we look into iterative maximum likelihood parameter estimation based on the expectation maximization algorithm and its application in coherent optical communication systems for linear and nonlinear impairment mitigation. Furthermore, the estimated parameters are used to build a probabilistic model of the system for synthetic impairment generation.
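The abstract does not give the estimator itself; as a generic sketch of the EM iteration it refers to (a two-component Gaussian mixture fit, not the paper's impairment model), one might write:

```python
import math
import random

def em_gmm(xs, iters=50):
    """Fit a two-component 1-D Gaussian mixture by expectation maximization."""
    mu = [min(xs), max(xs)]   # crude initialization from the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate mixture weights, means, and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, pi

random.seed(0)
data = ([random.gauss(-2, 0.5) for _ in range(300)]
        + [random.gauss(3, 0.5) for _ in range(300)])
mu, var, pi = em_gmm(data)
print(sorted(round(m, 1) for m in mu))
```

Each iteration provably does not decrease the data log-likelihood, which is why EM is attractive for blind parameter estimation from received symbols.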

  12. Shareholder, stakeholder-owner or broad stakeholder maximization

    OpenAIRE

    Mygind, Niels

    2004-01-01

With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions is shareholder maximization larger than or equal to stakeholder-owner...

  13. A CRISPR CASe for High-Throughput Silencing

    Directory of Open Access Journals (Sweden)

Jacob Heintze

    2013-10-01

Full Text Available Manipulation of gene expression on a genome-wide level is one of the most important systematic tools in the post-genome era. Such manipulations have largely been enabled by expression cloning approaches using sequence-verified cDNA libraries, large-scale RNA interference libraries (shRNA or siRNA) and zinc finger nuclease technologies. More recently, the CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) and CRISPR-associated (Cas9)-mediated gene editing technology has been described that holds great promise for future use of this technology in genomic manipulation. It was suggested that the CRISPR system has the potential to be used in high-throughput, large-scale loss of function screening. Here we discuss some of the challenges in engineering of CRISPR/Cas genomic libraries and some of the aspects that need to be addressed in order to use this technology on a high-throughput scale.

  14. MONO: A program to calculate synchrotron beamline monochromator throughputs

    International Nuclear Information System (INIS)

    Chapman, D.

    1989-01-01

A set of Fortran programs has been developed to calculate the expected throughput of x-ray monochromators with a filtered synchrotron source; it is applicable to bending magnet and wiggler beamlines. These programs calculate the normalized throughput and filtered synchrotron spectrum passed by multiple-element, flat unfocussed monochromator crystals of the Bragg or Laue type as a function of incident beam divergence, energy and polarization. The reflected and transmitted beam of each crystal is calculated using the dynamical theory of diffraction. Multiple crystal arrangements in the dispersive and non-dispersive mode are allowed, as well as crystal asymmetry and energy or angle offsets. Filters or windows of arbitrary elemental composition may be used to filter the incident synchrotron beam. This program should be useful to predict the intensities available from many beamline configurations as well as assist in the design of new monochromator and analyzer systems. 6 refs., 3 figs

  15. Reverse Phase Protein Arrays for High-throughput Toxicity Screening

    DEFF Research Database (Denmark)

    Pedersen, Marlene Lemvig; Block, Ines; List, Markus

High-throughput screening is extensively applied for identification of drug targets and drug discovery, and recently it found entry into toxicity testing. Reverse phase protein arrays (RPPAs) are widely used for quantification of protein markers. We reasoned that RPPAs can also be utilized beneficially in automated high-throughput toxicity testing. An advantage of using RPPAs is that, in addition to the baseline toxicity readout, they allow testing of multiple markers of toxicity, such as inflammatory responses, which do not necessarily cumulate in cell death. We used transfection of siRNAs with known killing effects as a model system to demonstrate that RPPA-based protein quantification can serve as substitute readout of cell viability, hereby reliably reflecting toxicity. In terms of automation, cell exposure, protein harvest, serial dilution and sample reformatting were performed using...

  16. Throughput centered prioritization of machines in transfer lines

    International Nuclear Information System (INIS)

    Pascual, R.; Godoy, D.; Louit, D.M.

    2011-01-01

    In an environment of scarce resources and complex production systems, prioritizing is key to confront the challenge of managing physical assets. In the literature, there exist a number of techniques to prioritize maintenance decisions that consider safety, technical and business perspectives. However, the effect of risk mitigating elements-such as intermediate buffers in production lines-on prioritization has not yet been investigated in depth. In this line, the work proposes a user-friendly graphical technique called the system efficiency influence diagram (SEID). Asset managers may use SEID to identify machines that have a greater impact on the system throughput, and thus set prioritized maintenance policies and/or redesign of buffers capacities. The tool provides insight to the analyst as it decomposes the influence of a given machine on the system throughput as a product of two elements: (1) system influence efficiency factor and (2) machine unavailability factor. We illustrate its applicability using three case studies: a four-machine transfer line, a vehicle assembly line, and an open-pit mining conveyor system. The results confirm that the machines with greater unavailability factors are not necessarily the most important for the efficiency of the production line, as it is the case when no intermediate buffers exist. As a decision aid tool, SEID emphasizes the need to move from a maintenance vision focused on machine availability, to a systems engineering perspective. - Highlights: → We propose a graphical technique to prioritize machines in production lines. → The tool is called 'system efficiency influence diagram' (SEID). → It helps setting prioritized maintenance policies and/or redesign of buffers. → The SEID technique focuses on system efficiency and throughput. → We illustrate its applicability using three case studies.
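To illustrate the decomposition described (with made-up numbers, not the paper's case-study data), a machine's influence on system throughput can be computed as the product of the two SEID factors:

```python
# Hypothetical three-machine transfer line. Each machine's influence on
# lost throughput is modeled, SEID-style, as the product of its system
# influence efficiency factor and its unavailability factor.
machines = {
    "M1": {"efficiency_factor": 0.9, "unavailability": 0.02},
    "M2": {"efficiency_factor": 0.3, "unavailability": 0.08},
    "M3": {"efficiency_factor": 1.0, "unavailability": 0.03},
}

influence = {name: m["efficiency_factor"] * m["unavailability"]
             for name, m in machines.items()}
priority = sorted(influence, key=influence.get, reverse=True)
print(priority)
```

Note that M2 has the highest unavailability yet M3 ranks first, echoing the paper's observation that with intermediate buffers the most unavailable machine is not necessarily the most important for line efficiency.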

  17. Throughput centered prioritization of machines in transfer lines

    Energy Technology Data Exchange (ETDEWEB)

    Pascual, R., E-mail: rpascual@ing.puc.cl [Physical Asset Management Lab, Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Godoy, D. [Physical Asset Management Lab, Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Louit, D.M. [Komatsu Chile S.A., Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)

    2011-10-15

In an environment of scarce resources and complex production systems, prioritizing is key to confront the challenge of managing physical assets. In the literature, there exist a number of techniques to prioritize maintenance decisions that consider safety, technical and business perspectives. However, the effect of risk mitigating elements-such as intermediate buffers in production lines-on prioritization has not yet been investigated in depth. In this line, the work proposes a user-friendly graphical technique called the system efficiency influence diagram (SEID). Asset managers may use SEID to identify machines that have a greater impact on the system throughput, and thus set prioritized maintenance policies and/or redesign of buffers capacities. The tool provides insight to the analyst as it decomposes the influence of a given machine on the system throughput as a product of two elements: (1) system influence efficiency factor and (2) machine unavailability factor. We illustrate its applicability using three case studies: a four-machine transfer line, a vehicle assembly line, and an open-pit mining conveyor system. The results confirm that the machines with greater unavailability factors are not necessarily the most important for the efficiency of the production line, as it is the case when no intermediate buffers exist. As a decision aid tool, SEID emphasizes the need to move from a maintenance vision focused on machine availability, to a systems engineering perspective. - Highlights: → We propose a graphical technique to prioritize machines in production lines. → The tool is called 'system efficiency influence diagram' (SEID). → It helps setting prioritized maintenance policies and/or redesign of buffers. → The SEID technique focuses on system efficiency and throughput. → We illustrate its applicability using three case studies.

  18. AOPs and Biomarkers: Bridging High Throughput Screening ...

    Science.gov (United States)

As high throughput screening (HTS) plays a larger role in toxicity testing, computational toxicology has emerged as a critical component in interpreting the large volume of data produced. Computational models designed to quantify potential adverse effects based on HTS data will benefit from additional data sources that connect the magnitude of perturbation from the in vitro system to a level of concern at the organism or population level. The adverse outcome pathway (AOP) concept provides an ideal framework for combining these complementary data. Recent international efforts under the auspices of the Organization for Economic Co-operation and Development (OECD) have resulted in an AOP wiki designed to house formal descriptions of AOPs suitable for use in regulatory decision making. Recent efforts have built upon this to include an ontology describing the AOP with linkages to biological pathways, physiological terminology, and taxonomic applicability domains. Incorporation of an AOP network tool developed by the U.S. Army Corps of Engineers also allows consideration of cumulative risk from chemical and non-chemical stressors. Biomarkers are an important complement to formal AOP descriptions, particularly when dealing with susceptible subpopulations or lifestages in human health risk assessment. To address the issue of nonchemical stressors that may modify effects of criteria air pollutants, a novel method was used to integrate blood gene expression data with hema

  19. Vacua of maximal gauged D=3 supergravities

    International Nuclear Information System (INIS)

    Fischbacher, T; Nicolai, H; Samtleben, H

    2002-01-01

We analyse the scalar potentials of maximal gauged three-dimensional supergravities which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, except for the SO(4,4)^2 gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them are de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)^2 and SO(4,4)^2, as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories.

  20. Dynamical generation of maximally entangled states in two identical cavities

    International Nuclear Information System (INIS)

    Alexanian, Moorad

    2011-01-01

    The generation of entanglement between two identical coupled cavities, each containing a single three-level atom, is studied when the cavities exchange two coherent photons and are in the N=2,4 manifolds, where N represents the maximum number of photons possible in either cavity. The atom-photon state of each cavity is described by a qutrit for N=2 and a five-dimensional qudit for N=4. However, the conservation of the total value of N for the interacting two-cavity system limits the total number of states to only 4 states for N=2 and 8 states for N=4, rather than the usual 9 for two qutrits and 25 for two five-dimensional qudits. In the N=2 manifold, two-qutrit states dynamically generate four maximally entangled Bell states from initially unentangled states. In the N=4 manifold, two-qudit states dynamically generate maximally entangled states involving three or four states. The generation of these maximally entangled states occurs rather rapidly for large hopping strengths. The cavities function as a storage of periodically generated maximally entangled states.

  1. Coded throughput performance simulations for the time-varying satellite channel. M.S. Thesis

    Science.gov (United States)

    Han, LI

    1995-01-01

The design of a reliable satellite communication link involving the data transfer from a small, low-orbit satellite to a ground station, but through a geostationary satellite, was examined. In such a scenario, the received signal power to noise density ratio increases as the transmitting low-orbit satellite comes into view, and then decreases as it then departs, resulting in a short-duration, time-varying communication link. The optimal values of the small satellite antenna beamwidth, signaling rate, modulation scheme and the theoretical link throughput (in bits per day) have been determined. The goal of this thesis is to choose a practical coding scheme which maximizes the daily link throughput while satisfying a prescribed probability of error requirement. We examine the throughput of both fixed rate and variable rate concatenated forward error correction (FEC) coding schemes for the additive white Gaussian noise (AWGN) channel, and then examine the effect of radio frequency interference (RFI) on the best coding scheme among them. Interleaving is used to mitigate degradation due to RFI. It was found that the variable rate concatenated coding scheme could achieve 74 percent of the theoretical throughput, equivalent to 1.11 Gbits/day based on the cutoff rate R_0. For comparison, 87 percent is achievable for the AWGN-only case.

  2. An information maximization model of eye movements

    Science.gov (United States)

    Renninger, Laura Walker; Coughlan, James; Verghese, Preeti; Malik, Jitendra

    2005-01-01

    We propose a sequential information maximization model as a general strategy for programming eye movements. The model reconstructs high-resolution visual information from a sequence of fixations, taking into account the fall-off in resolution from the fovea to the periphery. From this framework we get a simple rule for predicting fixation sequences: after each fixation, fixate next at the location that minimizes uncertainty (maximizes information) about the stimulus. By comparing our model performance to human eye movement data and to predictions from a saliency and random model, we demonstrate that our model is best at predicting fixation locations. Modeling additional biological constraints will improve the prediction of fixation sequences. Our results suggest that information maximization is a useful principle for programming eye movements.

  3. High throughput sample processing and automated scoring

    Directory of Open Access Journals (Sweden)

Gunnar Brunborg

    2014-10-01

Full Text Available The comet assay is a sensitive and versatile method for assessing DNA damage in cells. In the traditional version of the assay, there are many manual steps involved and few samples can be treated in one experiment. High throughput modifications have been developed during recent years, and they are reviewed and discussed. These modifications include accelerated scoring of comets; other important elements that have been studied and adapted to high throughput are cultivation and manipulation of cells or tissues before and after exposure, and freezing of treated samples until comet analysis and scoring. High throughput methods save time and money but they are useful also for other reasons: large-scale experiments may be performed which are otherwise not practicable (e.g., analysis of many organs from exposed animals, and human biomonitoring studies), and automation gives more uniform sample treatment and less dependence on operator performance. The high throughput modifications now available vary largely in their versatility, capacity, complexity and costs. The bottleneck for further increase of throughput appears to be the scoring.

  4. Maximizing band gaps in plate structures

    DEFF Research Database (Denmark)

    Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard

    2006-01-01

Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated theoretically and experimentally and the issue of finite size effects is addressed.

  5. Singularity Structure of Maximally Supersymmetric Scattering Amplitudes

    DEFF Research Database (Denmark)

    Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy

    2014-01-01

We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity—properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).

  6. Learning curves for mutual information maximization

    International Nuclear Information System (INIS)

    Urbanczik, R.

    2003-01-01

    An unsupervised learning procedure based on maximizing the mutual information between the outputs of two networks receiving different but statistically dependent inputs is analyzed [S. Becker and G. Hinton, Nature (London) 355, 161 (1992)]. For a generic data model, I show that in the large sample limit the structure in the data is recognized by mutual information maximization. For a more restricted model, where the networks are similar to perceptrons, I calculate the learning curves for zero-temperature Gibbs learning. These show that convergence can be rather slow, and a way of regularizing the procedure is considered

  7. Finding Maximal Pairs with Bounded Gap

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Lyngsø, Rune B.; Pedersen, Christian N. S.

    1999-01-01

In this paper we present methods for finding all maximal pairs under various constraints on the gap. In a string of length n we can find all maximal pairs with gap in an upper and lower bounded interval in time O(n log n + z) where z is the number of reported pairs. If the upper bound is removed the time reduces to O(n + z). Since a tandem repeat is a pair where the gap is zero, our methods can be seen as a generalization of finding tandem repeats. The running time of our methods equals the running time of well known methods for finding tandem repeats.

  8. Efficient Conservation in a Utility-Maximization Framework

    Directory of Open Access Journals (Sweden)

    Frank W. Davis

    2006-06-01

    Full Text Available Systematic planning for biodiversity conservation is being conducted at scales ranging from global to national to regional. The prevailing planning paradigm is to identify the minimum land allocations needed to reach specified conservation targets or maximize the amount of conservation accomplished under an area or budget constraint. We propose a more general formulation for setting conservation priorities that involves goal setting, assessing the current conservation system, developing a scenario of future biodiversity given the current conservation system, and allocating available conservation funds to alter that scenario so as to maximize future biodiversity. Under this new formulation for setting conservation priorities, the value of a site depends on resource quality, threats to resource quality, and costs. This planning approach is designed to support collaborative processes and negotiation among competing interest groups. We demonstrate these ideas with a case study of the Sierra Nevada bioregion of California.

  9. Planning for partnerships: Maximizing surge capacity resources through service learning.

    Science.gov (United States)

    Adams, Lavonne M; Reams, Paula K; Canclini, Sharon B

    2015-01-01

    Infectious disease outbreaks and natural or human-caused disasters can strain the community's surge capacity through sudden demand on healthcare activities. Collaborative partnerships between communities and schools of nursing have the potential to maximize resource availability to meet community needs following a disaster. This article explores how communities can work with schools of nursing to enhance surge capacity through systems thinking, integrated planning, and cooperative efforts.

  10. Maximizing the Range of a Projectile.

    Science.gov (United States)

    Brown, Ronald A.

    1992-01-01

    Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
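For the calculus-free solutions mentioned, the underlying result on level ground without air resistance is R = v₀² sin(2θ)/g, maximized at θ = 45°. A quick numerical check (illustrative, not taken from the article):

```python
import math

def projectile_range(v0, theta_deg, g=9.81):
    """Range on flat ground, no air resistance: R = v0^2 * sin(2*theta) / g."""
    return v0 ** 2 * math.sin(2 * math.radians(theta_deg)) / g

# Scan integer launch angles; sin(2*theta) peaks at theta = 45 degrees.
best = max(range(1, 90), key=lambda a: projectile_range(20.0, a))
print(best)  # 45
```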

  11. Robust Utility Maximization Under Convex Portfolio Constraints

    International Nuclear Information System (INIS)

    Matoussi, Anis; Mezghani, Hanen; Mnif, Mohamed

    2015-01-01

We study a robust maximization problem from terminal wealth and consumption under convex constraints on the portfolio. We state the existence and the uniqueness of the consumption–investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle

  12. Ehrenfest's Lottery--Time and Entropy Maximization

    Science.gov (United States)

    Ashbaugh, Henry S.

    2010-01-01

    Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
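The urn dynamics described can be sketched numerically; the two-urn variant below (a simplification of the multi-urn lottery, with illustrative parameters) shows the marble count relaxing from an all-in-one-urn start toward the entropy-maximizing even split:

```python
import random

def ehrenfest(n_marbles=100, steps=20000, seed=1):
    """Ehrenfest urn model: each step, pick a marble uniformly at random
    and move it to the other urn; the count in urn A drifts toward n/2."""
    random.seed(seed)
    in_a = n_marbles  # start with every marble in urn A
    history = []
    for _ in range(steps):
        if random.random() < in_a / n_marbles:
            in_a -= 1  # the chosen marble was in A, so it moves to B
        else:
            in_a += 1  # the chosen marble was in B, so it moves to A
        history.append(in_a)
    return history

hist = ehrenfest()
print(hist[-1])
```

At equilibrium the count fluctuates around n/2 with standard deviation √(n/4), i.e. about 5 marbles here, which is the diffusion-like behavior the abstract refers to.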

  13. Reserve design to maximize species persistence

    Science.gov (United States)

    Robert G. Haight; Laurel E. Travis

    2008-01-01

    We develop a reserve design strategy to maximize the probability of species persistence predicted by a stochastic, individual-based, metapopulation model. Because the population model does not fit exact optimization procedures, our strategy involves deriving promising solutions from theory, obtaining promising solutions from a simulation optimization heuristic, and...

  14. Maximal indecomposable past sets and event horizons

    International Nuclear Information System (INIS)

    Krolak, A.

    1984-01-01

    The existence of maximal indecomposable past sets MIPs is demonstrated using the Kuratowski-Zorn lemma. A criterion for the existence of an absolute event horizon in space-time is given in terms of MIPs and a relation to black hole event horizon is shown. (author)

  15. Maximization of eigenvalues using topology optimization

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2000-01-01

    to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency, One example...

  16. Maximizing scientific knowledge from randomized clinical trials

    DEFF Research Database (Denmark)

    Gustafsson, Finn; Atar, Dan; Pitt, Bertram

    2010-01-01

    Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly vari...

  17. A Model of College Tuition Maximization

    Science.gov (United States)

    Bosshardt, Donald I.; Lichtenstein, Larry; Zaporowski, Mark P.

    2009-01-01

This paper develops a series of models for optimal tuition pricing for private colleges and universities. The university is assumed to be a profit-maximizing, price-discriminating monopolist. The enrollment decision of students is stochastic in nature. The university offers an effective tuition rate, comprised of stipulated tuition less financial…

  18. High throughput nanoimprint lithography for semiconductor memory applications

    Science.gov (United States)

    Ye, Zhengmao; Zhang, Wei; Khusnatdinov, Niyaz; Stachowiak, Tim; Irving, J. W.; Longsine, Whitney; Traub, Matthew; Fletcher, Brian; Liu, Weijun

    2017-03-01

Imprint lithography is a promising technology for replication of nano-scale features. For semiconductor device applications, Canon deposits a low viscosity resist on a field by field basis using jetting technology. A patterned mask is lowered into the resist fluid, which then quickly flows into the relief patterns in the mask by capillary action. Following this filling step, the resist is crosslinked under UV radiation, and then the mask is removed, leaving a patterned resist on the substrate. There are two critical components to meeting throughput requirements for imprint lithography. Using a similar approach to what is already done for many deposition and etch processes, imprint stations can be clustered to enhance throughput. The FPA-1200NZ2C is a four station cluster system designed for high volume manufacturing. For a single station, throughput includes overhead, resist dispense, resist fill time (or spread time), exposure and separation. Resist exposure time and mask/wafer separation are well understood processing steps with typical durations on the order of 0.10 to 0.20 seconds. To achieve a total process throughput of 17 wafers per hour (wph) for a single station, it is necessary to complete the fluid fill step in 1.2 seconds. For a throughput of 20 wph, fill time must be reduced to only 1.1 seconds. There are several parameters that can impact resist filling. Key parameters include resist drop volume (smaller is better), system controls (which address drop spreading after jetting), Design for Imprint or DFI (to accelerate drop spreading) and material engineering (to promote wetting between the resist and underlying adhesion layer). In addition, it is mandatory to maintain fast filling, even for edge field imprinting. In this paper, we address the improvements made in all of these parameters to enable a 1.20 second filling process for a device-like pattern, and demonstrate this capability for both full fields and edge fields. Non
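As a rough illustration of how per-field process times translate into station throughput, the sketch below is a hypothetical back-of-envelope calculation; the field count per wafer, dispense time, and per-wafer overhead are assumed values for illustration, not figures from the paper:

```python
# Hypothetical back-of-envelope for single-station imprint throughput.
# Field count and overhead are assumptions, not published figures.
FIELDS_PER_WAFER = 87          # assumed full + edge fields on a 300 mm wafer
OVERHEAD_S = 30.0              # assumed per-wafer handling overhead

def wafers_per_hour(fill_s, expose_s=0.15, separate_s=0.15, dispense_s=0.5):
    """Wafers per hour given per-field step durations in seconds."""
    per_field = dispense_s + fill_s + expose_s + separate_s
    return 3600.0 / (FIELDS_PER_WAFER * per_field + OVERHEAD_S)

for fill in (1.2, 1.1):
    print(f"fill={fill:.1f}s -> {wafers_per_hour(fill):.1f} wph")
```

With these assumed values, shaving 0.1 s off the fill step buys roughly one extra wafer per hour, which is the qualitative point of the paper's fill-time targets.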

  19. Understanding Violations of Gricean Maxims in Preschoolers and Adults

    Directory of Open Access Journals (Sweden)

Mako Okanda

    2015-07-01

This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.

  20. Coding for Parallel Links to Maximize the Expected Value of Decodable Messages

    Science.gov (United States)

    Klimesh, Matthew A.; Chang, Christopher S.

    2011-01-01

    When multiple parallel communication links are available, it is useful to consider link-utilization strategies that provide tradeoffs between reliability and throughput. Interesting cases arise when there are three or more available links. Under the model considered, the links have known probabilities of being in working order, and each link has a known capacity. The sender has a number of messages to send to the receiver. Each message has a size and a value (i.e., a worth or priority). Messages may be divided into pieces arbitrarily, and the value of each piece is proportional to its size. The goal is to choose combinations of messages to send on the links so that the expected value of the messages decodable by the receiver is maximized. There are three parts to the innovation: (1) Applying coding to parallel links under the model; (2) Linear programming formulation for finding the optimal combinations of messages to send on the links; and (3) Algorithms for assisting in finding feasible combinations of messages, as support for the linear programming formulation. There are similarities between this innovation and methods developed in the field of network coding. However, network coding has generally been concerned with either maximizing throughput in a fixed network, or robust communication of a fixed volume of data. In contrast, under this model, the throughput is expected to vary depending on the state of the network. Examples of error-correcting codes that are useful under this model but which are not needed under previous models have been found. This model can represent either a one-shot communication attempt, or a stream of communications. Under the one-shot model, message sizes and link capacities are quantities of information (e.g., measured in bits), while under the communications stream model, message sizes and link capacities are information rates (e.g., measured in bits/second). This work has the potential to increase the value of data returned from
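A drastically simplified version of the allocation problem (whole messages only, with no splitting and no coding; link probabilities, capacities, and message values are toy numbers invented for illustration) can be brute-forced to show the expected-value objective:

```python
import itertools

# Toy instance: (p_working, capacity) per link and (size, value) per message.
links = [(0.9, 10), (0.7, 10), (0.5, 10)]
messages = [(6, 6.0), (5, 4.0), (4, 3.0)]

def expected_value(assign):
    """Expected decodable value when message i is sent whole on link assign[i]."""
    load = [0] * len(links)
    value = [0.0] * len(links)
    for (size, val), li in zip(messages, assign):
        load[li] += size
        value[li] += val
    if any(load[i] > links[i][1] for i in range(len(links))):
        return -1.0                          # infeasible: link over capacity
    # Each link delivers its messages independently with its working probability.
    return sum(links[i][0] * value[i] for i in range(len(links)))

best = max(itertools.product(range(len(links)), repeat=len(messages)),
           key=expected_value)
print(best, round(expected_value(best), 2))
```

The paper's linear-programming formulation generalizes this by allowing messages to be divided and coded across links; this enumeration only illustrates the objective being maximized.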

  1. Ultraspecific probes for high throughput HLA typing

    Directory of Open Access Journals (Sweden)

    Eggers Rick

    2009-02-01

Background: The variations within an individual's HLA (Human Leukocyte Antigen) genes have been linked to many immunological events, e.g. susceptibility to disease, response to vaccines, and the success of blood, tissue, and organ transplants. Although the microarray format has the potential to achieve high-resolution typing, this has yet to be attained due to inefficiencies of current probe design strategies. Results: We present a novel three-step approach for the design of high-throughput microarray assays for HLA typing. This approach first selects sequences containing the SNPs present in all alleles of the locus of interest, and next calculates the number of base changes necessary to convert a candidate probe sequence to the closest subsequence within the set of sequences that are likely to be present in the sample, including the remainder of the human genome, in order to identify those candidate probes which are "ultraspecific" for the allele of interest. Due to the high specificity of these sequences, it is possible that preliminary steps such as PCR amplification are no longer necessary. Lastly, the minimum number of these ultraspecific probes is selected such that the highest resolution typing can be achieved for the minimal cost of production. As an example, an array was designed and in silico results were obtained for typing of the HLA-B locus. Conclusion: The assay presented here provides a higher resolution than has previously been developed and includes more alleles than previously considered. Based upon the in silico and preliminary experimental results, we believe that the proposed approach can be readily applied to any highly polymorphic gene system.
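The "ultraspecific" criterion in the second step can be illustrated with a toy calculation; the sequences below are made up, and real probe design would scan the entire genome and account for indels, which this substitution-only (Hamming distance) sketch ignores:

```python
def min_base_changes(probe, background):
    """Fewest substitutions turning `probe` into some window of `background`."""
    k = len(probe)
    return min(sum(a != b for a, b in zip(probe, background[i:i + k]))
               for i in range(len(background) - k + 1))

# Toy example: a short hypothetical background sequence and two candidate probes.
# A probe far (in base changes) from every off-target window is "ultraspecific".
background = "ACGTACGTTTGACCA"
probes = {"TTGACCA": None, "GGGGGGG": None}
for p in probes:
    probes[p] = min_base_changes(p, background)
print(probes)
```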

  2. Controlling high-throughput manufacturing at the nano-scale

    Science.gov (United States)

    Cooper, Khershed P.

    2013-09-01

Interest in nano-scale manufacturing research and development is growing. The reason is to accelerate the translation of discoveries and inventions of nanoscience and nanotechnology into products that would benefit industry, economy and society. Ongoing research in nanomanufacturing is focused primarily on developing novel nanofabrication techniques for a variety of applications (materials, energy, electronics, photonics, biomedical, etc.). Our goal is to foster the development of high-throughput methods of fabricating nano-enabled products. Large-area parallel processing and high-speed continuous processing are high-throughput means for mass production. An example of large-area processing is step-and-repeat nanoimprinting, by which nanostructures are reproduced again and again over a large area, such as a 12 in wafer. Roll-to-roll processing is an example of continuous processing, by which it is possible to print and imprint multi-level nanostructures and nanodevices on a moving flexible substrate. The big pay-off is high-volume production and low unit cost. However, the anticipated cost benefits can only be realized if the increased production rate is accompanied by high yields of high quality products. To ensure product quality, we need to design and construct manufacturing systems such that the processes can be closely monitored and controlled. One approach is to bring cyber-physical systems (CPS) concepts to nanomanufacturing. CPS involves the control of a physical system such as manufacturing through modeling, computation, communication and control. Such a closely coupled system will involve in-situ metrology and closed-loop control of the physical processes guided by physics-based models and driven by appropriate instrumentation, sensing and actuation. This paper will discuss these ideas in the context of controlling high-throughput manufacturing at the nano-scale.

  3. Throughput assurance of wireless body area networks coexistence based on stochastic geometry.

    Directory of Open Access Journals (Sweden)

    Ruixia Liu

Wireless body area networks (WBANs) are expected to influence the traditional medical model by assisting caretakers with health telemonitoring. Within WBANs, the transmit power of the nodes should be as small as possible owing to their limited energy capacity, but should be sufficiently large to guarantee the quality of the signal at the receiving nodes. When multiple WBANs coexist in a small area, the communication reliability and overall throughput can be seriously affected due to resource competition and interference. We show that the total network throughput largely depends on the WBAN distribution density (λp), the transmit power of their nodes (Pt), and their carrier-sensing threshold (γ). Using stochastic geometry, a joint carrier-sensing threshold and power control strategy is proposed to meet the demand of coexisting WBANs based on the IEEE 802.15.4 standard. Given different network distributions and carrier-sensing thresholds, the proposed strategy derives a minimum transmit power according to the varying surrounding environment. We obtain expressions for the transmission success probability and throughput adopting this strategy. Using numerical examples, we show that the joint carrier-sensing threshold and transmit power strategy can effectively improve the overall system throughput and reduce interference. Additionally, this paper studies the effects of a guard zone on the throughput using a Matern hard-core point process (HCPP) type II model. Theoretical analysis and simulation results show that the HCPP model can increase the success probability and throughput of networks.
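A minimal Monte Carlo sketch of the stochastic-geometry idea: interfering WBAN coordinators are drawn from a Poisson point process, and the success probability is estimated as the fraction of trials in which the reference link's signal-to-interference ratio clears a threshold. All parameter values (path-loss exponent, link distance, SIR threshold, simulation radius) are illustrative assumptions, not values from the paper:

```python
import math
import random

random.seed(7)

def poisson(mean):
    """Knuth's method for drawing a Poisson-distributed count."""
    l, k, p = math.exp(-mean), 0, 1.0
    while p > l:
        k += 1
        p *= random.random()
    return k - 1

def success_probability(lam, pt, beta=1.0, alpha=3.5, d0=0.5,
                        radius=20.0, trials=2000):
    """Monte Carlo estimate of P(SIR >= beta) for a reference link of length d0,
    with interferers scattered as a Poisson process of density lam (per m^2)."""
    area = math.pi * radius**2
    ok = 0
    for _ in range(trials):
        interference = 0.0
        for _ in range(poisson(lam * area)):
            # uniform point in a disk around the reference receiver
            r = radius * math.sqrt(random.random())
            interference += pt * max(r, 0.1)**(-alpha)   # clamp near-field singularity
        signal = pt * d0**(-alpha)
        ok += signal >= beta * interference
    return ok / trials

p_sparse = success_probability(0.01, 1.0)
p_dense = success_probability(0.05, 1.0)
print(p_sparse, p_dense)
```

Denser deployments raise the interference and lower the success probability, which is the dependence on λp that the paper's power-control strategy is designed to counteract.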

  4. Generation and evaluation of mammalian secreted and membrane protein expression libraries for high-throughput target discovery.

    Science.gov (United States)

    Panavas, Tadas; Lu, Jin; Liu, Xuesong; Winkis, Ann-Marie; Powers, Gordon; Naso, Michael F; Amegadzie, Bernard

    2011-09-01

Expressed protein libraries are becoming a critical tool for new target discovery in the pharmaceutical industry. In order to get the most meaningful and comprehensive results from protein library screens, it is essential to have library proteins in their native conformation with proper post-translational modifications. This goal is achieved by expressing untagged human proteins in a human cell background. We optimized the transfection and cell culture conditions to maximize protein expression in a 96-well format so that the expression levels were comparable with the levels observed in shake flasks. For detection purposes, we engineered a 'tag after stop codon' system. Depending on the expression conditions, it was possible to express either native or tagged proteins from the same expression vector set. We created a human secretion protein library of 1432 candidates and a small plasma membrane protein set of about 500 candidates. Utilizing the optimized expression conditions, we expressed and analyzed both libraries by SDS-PAGE gel electrophoresis and Western blotting. Two thirds of secreted proteins could be detected by Western-blot analyses; almost half of them were visible on Coomassie stained gels. In this paper, we describe protein expression libraries that can be easily produced in mammalian expression systems in a 96-well format, with one protein expressed per well. The libraries and methods described allow for the development of robust, high-throughput functional screens designed to assay for protein specific functions associated with a relevant disease-specific activity. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. High Throughput Analysis of Photocatalytic Water Purification

    NARCIS (Netherlands)

    Sobral Romao, J.I.; Baiao Barata, David; Habibovic, Pamela; Mul, Guido; Baltrusaitis, Jonas

    2014-01-01

    We present a novel high throughput photocatalyst efficiency assessment method based on 96-well microplates and UV-Vis spectroscopy. We demonstrate the reproducibility of the method using methyl orange (MO) decomposition, and compare kinetic data obtained with those provided in the literature for

  6. High-throughput scoring of seed germination

    NARCIS (Netherlands)

    Ligterink, Wilco; Hilhorst, Henk W.M.

    2017-01-01

    High-throughput analysis of seed germination for phenotyping large genetic populations or mutant collections is very labor intensive and would highly benefit from an automated setup. Although very often used, the total germination percentage after a nominated period of time is not very

  7. A programmable, scalable-throughput interleaver

    NARCIS (Netherlands)

    Rijshouwer, E.J.C.; Berkel, van C.H.

    2010-01-01

    The interleaver stages of digital communication standards show a surprisingly large variation in throughput, state sizes, and permutation functions. Furthermore, data rates for 4G standards such as LTE-Advanced will exceed typical baseband clock frequencies of handheld devices. Multistream operation

  8. Refined reservoir description to maximize oil recovery

    International Nuclear Information System (INIS)

    Flewitt, W.E.

    1975-01-01

    To assure maximized oil recovery from older pools, reservoir description has been advanced by fully integrating original open-hole logs and the recently introduced interpretive techniques made available through cased-hole wireline saturation logs. A refined reservoir description utilizing normalized original wireline porosity logs has been completed in the Judy Creek Beaverhill Lake ''A'' Pool, a reefal carbonate pool with current potential productivity of 100,000 BOPD and 188 active wells. Continuous porosity was documented within a reef rim and cap while discontinuous porous lenses characterized an interior lagoon. With the use of pulsed neutron logs and production data a separate water front and pressure response was recognized within discrete environmental units. The refined reservoir description aided in reservoir simulation model studies and quantifying pool performance. A pattern water flood has now replaced the original peripheral bottom water drive to maximize oil recovery

  9. Global parameter optimization for maximizing radioisotope detection probabilities at fixed false alarm rates

    Energy Technology Data Exchange (ETDEWEB)

    Portnoy, David, E-mail: david.portnoy@jhuapl.edu [Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States); Feuerbach, Robert; Heimberg, Jennifer [Johns Hopkins University Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States)

    2011-10-01

Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff. The methodology was first developed in the field of signal detection. In recent years it has been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters, which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima. However, classic genetic algorithms operate on arrays of Boolean values or bit strings, so simulated annealing is employed to perform mutation in the genetic algorithm. The presented initial results were generated using an isotope identification algorithm developed at Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the 'threat' set of
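A minimal sketch of the hybrid optimizer described above, mixing a genetic algorithm with simulated-annealing-style mutation over a 12-dimensional parameter vector (4 real-valued, 8 Boolean, mirroring the record). The fitness function is a toy surrogate standing in for the detection probability, and the population sizes and cooling schedule are invented for illustration:

```python
import math
import random

random.seed(3)

N_REAL, N_BOOL = 4, 8   # mirrors the 12-parameter identification algorithm

def fitness(params):
    """Toy surrogate for detection probability (stand-in for the real scorer)."""
    reals, bools = params[:N_REAL], params[N_REAL:]
    return -sum((r - 0.5)**2 for r in reals) + 0.1 * sum(bools)

def random_individual():
    return [random.random() for _ in range(N_REAL)] + \
           [random.random() < 0.5 for _ in range(N_BOOL)]

def crossover(a, b):
    cut = random.randrange(1, N_REAL + N_BOOL)
    return a[:cut] + b[cut:]

def sa_mutate(ind, temp):
    """Simulated-annealing-style mutation: always keep an improving mutant,
    keep a worse one with probability exp(-delta/temp)."""
    child = list(ind)
    i = random.randrange(len(child))
    if i < N_REAL:
        child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
    else:
        child[i] = not child[i]
    delta = fitness(ind) - fitness(child)      # positive if the child is worse
    if delta <= 0 or random.random() < math.exp(-delta / temp):
        return child
    return ind

pop = [random_individual() for _ in range(30)]
temp = 1.0
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                          # elitist selection
    pop = parents + [sa_mutate(crossover(random.choice(parents),
                                         random.choice(parents)), temp)
                     for _ in range(20)]
    temp *= 0.95                                # cooling schedule

best = max(pop, key=fitness)
print(round(fitness(best), 3))
```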

  10. Global parameter optimization for maximizing radioisotope detection probabilities at fixed false alarm rates

    International Nuclear Information System (INIS)

    Portnoy, David; Feuerbach, Robert; Heimberg, Jennifer

    2011-01-01

Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff. The methodology was first developed in the field of signal detection. In recent years it has been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters, which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima. However, classic genetic algorithms operate on arrays of Boolean values or bit strings, so simulated annealing is employed to perform mutation in the genetic algorithm. The presented initial results were generated using an isotope identification algorithm developed at Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the 'threat' set of spectra

  11. Global parameter optimization for maximizing radioisotope detection probabilities at fixed false alarm rates

    Science.gov (United States)

    Portnoy, David; Feuerbach, Robert; Heimberg, Jennifer

    2011-10-01

Today there is a tremendous amount of interest in systems that can detect radiological or nuclear threats. Many of these systems operate in extremely high throughput situations where delays caused by false alarms can have a significant negative impact. Thus, calculating the tradeoff between detection rates and false alarm rates is critical for their successful operation. Receiver operating characteristic (ROC) curves have long been used to depict this tradeoff. The methodology was first developed in the field of signal detection. In recent years it has been used increasingly in machine learning and data mining applications. It follows that this methodology could be applied to radiological/nuclear threat detection systems. However, many of these systems do not fit into the classic principles of statistical detection theory because they tend to lack tractable likelihood functions and have many parameters, which, in general, do not have a one-to-one correspondence with the detection classes. This work proposes a strategy to overcome these problems by empirically finding parameter values that maximize the probability of detection for a selected number of probabilities of false alarm. To find these parameter values, a statistical global optimization technique that seeks to estimate portions of a ROC curve is proposed. The optimization combines elements of simulated annealing with elements of genetic algorithms. Genetic algorithms were chosen because they can reduce the risk of getting stuck in local minima. However, classic genetic algorithms operate on arrays of Boolean values or bit strings, so simulated annealing is employed to perform mutation in the genetic algorithm. The presented initial results were generated using an isotope identification algorithm developed at Johns Hopkins University Applied Physics Laboratory. The algorithm has 12 parameters: 4 real-valued and 8 Boolean. A simulated dataset was used for the optimization study; the "threat" set of spectra

  12. Derivative pricing based on local utility maximization

    OpenAIRE

    Jan Kallsen

    2002-01-01

This paper discusses a new approach to contingent claim valuation in general incomplete market models. We determine the neutral derivative price which occurs if investors maximize their local utility and if derivative demand and supply are balanced. We also introduce the sensitivity process of a contingent claim. This process quantifies the reliability of the neutral derivative price and it can be used to construct price bounds. Moreover, it allows one to calibrate market models in order to be co...

  13. Control of Shareholders’ Wealth Maximization in Nigeria

    OpenAIRE

    A. O. Oladipupo; C. O. Okafor

    2014-01-01

This research focuses on who controls shareholders' wealth maximization and how this affects firms' performance in publicly quoted non-financial companies in Nigeria. The shareholder fund was the dependent variable, while the explanatory variables were firm size (proxied by log of turnover), retained earnings (representing management control) and dividend payment (representing a measure of shareholders' control). The data used for this study were obtained from the Nigerian Stock Exchange [NSE] fact book an...

  14. Definable maximal discrete sets in forcing extensions

    DEFF Research Database (Denmark)

    Törnquist, Asger Dag; Schrittesser, David

    2018-01-01

    Let  be a Σ11 binary relation, and recall that a set A is -discrete if no two elements of A are related by . We show that in the Sacks and Miller forcing extensions of L there is a Δ12 maximal -discrete set. We use this to answer in the negative the main question posed in [5] by showing...

  15. Dynamic Convex Duality in Constrained Utility Maximization

    OpenAIRE

    Li, Yusong; Zheng, Harry

    2016-01-01

    In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...

  16. Gradient Dynamics and Entropy Production Maximization

    Science.gov (United States)

    Janečka, Adam; Pavelka, Michal

    2018-01-01

We compare two methods for modeling dissipative processes, namely gradient dynamics and entropy production maximization. Both methods require similar physical inputs: how energy (or entropy) is stored and how it is dissipated. Gradient dynamics describes irreversible evolution by means of a dissipation potential and entropy; it automatically satisfies Onsager reciprocal relations as well as their nonlinear generalization (Maxwell-Onsager relations), and it has a statistical interpretation. Entropy production maximization is based on knowledge of free energy (or another thermodynamic potential) and entropy production. It also leads to the linear Onsager reciprocal relations, and it has proven successful in the thermodynamics of complex materials. Both methods are thermodynamically sound as they ensure approach to equilibrium; we compare them and discuss their advantages and shortcomings. In particular, conditions under which the two approaches coincide and are capable of providing the same constitutive relations are identified. Besides, a commonly used but not often mentioned step in entropy production maximization is pinpointed, and the condition of incompressibility is incorporated into gradient dynamics.
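In generic notation (state variable x, entropy S, dissipation potential Ξ with conjugate Ξ*; the symbols are assumptions of this sketch, not taken from the paper), the gradient-dynamics formalism and its reduction to the linear Onsager relations can be sketched as:

```latex
% Gradient dynamics: irreversible evolution generated by a dissipation potential
\dot{x} \;=\; \left.\frac{\partial \Xi^{*}}{\partial x^{*}}\right|_{x^{*} = \frac{\partial S}{\partial x}},
\qquad
% a quadratic potential recovers the linear Onsager relations with L = L^{T}
\Xi^{*} = \tfrac{1}{2}\, x^{*}\, L\, x^{*}
\;\Rightarrow\;
\dot{x} \;=\; L\, \frac{\partial S}{\partial x}.
```

A non-quadratic Ξ* yields the nonlinear (Maxwell-Onsager) generalization mentioned in the abstract, which is the sense in which gradient dynamics extends the linear reciprocal relations that entropy production maximization also produces.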

  17. A High-Throughput SU-8 Microfluidic Magnetic Bead Separator

    DEFF Research Database (Denmark)

    Bu, Minqiang; Christensen, T. B.; Smistrup, Kristian

    2007-01-01

We present a novel microfluidic magnetic bead separator based on an SU-8 fabrication technique for high-throughput applications. The experimental results show that magnetic beads can be captured at an efficiency of 91% and 54% at flow rates of 1 mL/min and 4 mL/min, respectively. Integration of soft magnetic elements in the chip leads to a slightly higher capturing efficiency and a more uniform distribution of captured beads over the separation chamber than the system without soft magnetic elements.

  18. Creation of a small high-throughput screening facility.

    Science.gov (United States)

    Flak, Tod

    2009-01-01

    The creation of a high-throughput screening facility within an organization is a difficult task, requiring a substantial investment of time, money, and organizational effort. Major issues to consider include the selection of equipment, the establishment of data analysis methodologies, and the formation of a group having the necessary competencies. If done properly, it is possible to build a screening system in incremental steps, adding new pieces of equipment and data analysis modules as the need grows. Based upon our experience with the creation of a small screening service, we present some guidelines to consider in planning a screening facility.

  19. High Throughput WAN Data Transfer with Hadoop-based Storage

    Science.gov (United States)

    Amin, A.; Bockelman, B.; Letts, J.; Levshina, T.; Martin, T.; Pi, H.; Sfiligoi, I.; Thomas, M.; Wüerthwein, F.

    2011-12-01

The Hadoop distributed file system (HDFS) has become more popular in recent years as a key building block of integrated grid storage solutions in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations for large high energy physics experiments to manage, share and process datasets of PetaBytes scale in a highly distributed grid computing environment. In this paper, we present the experience of high throughput WAN data transfer with an HDFS-based Storage Element. Two protocols, GridFTP and fast data transfer (FDT), are used to characterize the network performance of WAN data transfer.

  20. High Throughput WAN Data Transfer with Hadoop-based Storage

    International Nuclear Information System (INIS)

    Amin, A; Thomas, M; Bockelman, B; Letts, J; Martin, T; Pi, H; Sfiligoi, I; Wüerthwein, F; Levshina, T

    2011-01-01

The Hadoop distributed file system (HDFS) has become more popular in recent years as a key building block of integrated grid storage solutions in the field of scientific computing. Wide Area Network (WAN) data transfer is one of the important data operations for large high energy physics experiments to manage, share and process datasets of PetaBytes scale in a highly distributed grid computing environment. In this paper, we present the experience of high throughput WAN data transfer with an HDFS-based Storage Element. Two protocols, GridFTP and fast data transfer (FDT), are used to characterize the network performance of WAN data transfer.

  1. High throughput screening method for assessing heterogeneity of microorganisms

    NARCIS (Netherlands)

    Ingham, C.J.; Sprenkels, A.J.; van Hylckama Vlieg, J.E.T.; Bomer, Johan G.; de Vos, W.M.; van den Berg, Albert

    2006-01-01

The invention relates to the field of microbiology. Provided is a method which is particularly powerful for High Throughput Screening (HTS) purposes. More specifically, a high-throughput method for determining the heterogeneity or interactions of microorganisms is provided.

  2. Application of ToxCast High-Throughput Screening and ...

    Science.gov (United States)

Slide presentation at the SETAC annual meeting on High-Throughput Screening and Modeling Approaches to Identify Steroidogenesis Disruptors.

  3. High Throughput PBTK: Open-Source Data and Tools for ...

    Science.gov (United States)

Presentation on High Throughput PBTK at the PBK Modelling in Risk Assessment meeting in Ispra, Italy.

  4. Cardiorespiratory Coordination in Repeated Maximal Exercise

    Directory of Open Access Journals (Sweden)

    Sergi Garcia-Retortillo

    2017-06-01

Increases in cardiorespiratory coordination (CRC) after training, with no differences in performance and physiological variables, have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 began 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC, defined by the number of PCs, in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC
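The eigenvalue-based coordination measure described in this abstract can be sketched in a few lines. The function below is an illustration only: the 90% variance threshold and the use of Shannon entropy over the normalized eigenvalue spectrum are assumptions for the sketch, not the authors' exact protocol.

```python
import numpy as np

def crc_summary(X, var_threshold=0.9):
    """Summarize cardiorespiratory coordination (CRC) from raw signals.

    X: (n_samples, n_variables) array of physiological time series
       (e.g. FeO2, FeCO2, ventilation, SBP, DBP, heart rate).
    Returns (number of PCs needed to reach `var_threshold` of the variance,
    first eigenvalue, Shannon entropy of the normalized eigenvalue spectrum).
    Fewer PCs and lower entropy indicate tighter coordination.
    """
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)            # z-score each variable
    eigvals = np.linalg.eigvalsh(np.cov(Xz, rowvar=False))[::-1]  # descending
    eigvals = np.clip(eigvals, 1e-12, None)              # guard tiny negatives
    ratios = eigvals / eigvals.sum()
    n_pcs = int(np.searchsorted(np.cumsum(ratios), var_threshold) + 1)
    entropy = float(-np.sum(ratios * np.log(ratios)))
    return n_pcs, float(eigvals[0]), entropy
```

Strongly coupled signals collapse onto one PC with low entropy, while independent signals require several PCs and yield higher entropy, matching the direction of the effect reported above.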

  5. A high-throughput, multi-channel photon-counting detector with picosecond timing

    CERN Document Server

    Lapington, J S; Miller, G M; Ashton, T J R; Jarron, P; Despeisse, M; Powolny, F; Howorth, J; Milnes, J

    2009-01-01

High-throughput photon counting with high time resolution is a niche application area where vacuum tubes can still outperform solid-state devices. Applications in the life sciences utilizing time-resolved spectroscopies, particularly in the growing field of proteomics, will benefit greatly from performance enhancements in event timing and detector throughput. The HiContent project is a collaboration between the University of Leicester Space Research Centre, the Microelectronics Group at CERN, Photek Ltd., and end-users at the Gray Cancer Institute and the University of Manchester. The goal is to develop a detector system specifically designed for optical proteomics, capable of high content (multi-parametric) analysis at high throughput. The HiContent detector system is being developed to exploit this niche market. It combines multi-channel, high time resolution photon counting in a single miniaturized detector system with integrated electronics. The combination of enabling technologies: small pore microchanne...

  6. Energy Efficiency and SINR Maximization Beamformers for Spectrum Sharing With Sensing Information

    KAUST Repository

    Alabbasi, AbdulRahman; Rezki, Zouheir; Shihada, Basem

    2014-01-01

    an underlaying communication using adaptive beamforming schemes combined with sensing information to achieve optimal energy-efficient systems. The proposed schemes maximize EE and SINR metrics subject to cognitive radio and quality-of-service constraints

  7. Energy efficiency and SINR maximization beamformers for cognitive radio utilizing sensing information

    KAUST Repository

    Alabbasi, AbdulRahman; Rezki, Zouheir; Shihada, Basem

    2014-01-01

    communication using adaptive beamforming schemes combined with the sensing information to achieve an optimal energy efficient system. The proposed schemes maximize the energy efficiency and SINR metrics subject to cognitive radio and quality of service

  8. A high throughput mechanical screening device for cartilage tissue engineering.

    Science.gov (United States)

    Mohanraj, Bhavana; Hou, Chieh; Meloni, Gregory R; Cosgrove, Brian D; Dodge, George R; Mauck, Robert L

    2014-06-27

Articular cartilage enables efficient and near-frictionless load transmission, but suffers from poor inherent healing capacity. As such, cartilage tissue engineering strategies have focused on mimicking both the compositional and mechanical properties of native tissue in order to provide effective repair materials for the treatment of damaged or degenerated joint surfaces. However, given the large number of design parameters available (e.g. cell sources, scaffold designs, and growth factors), it is difficult to conduct combinatorial experiments on engineered cartilage. This is particularly exacerbated when mechanical properties are a primary outcome, given the long time required for testing of individual samples. High throughput screening is utilized widely in the pharmaceutical industry to rapidly and cost-effectively assess the effects of thousands of compounds for therapeutic discovery. Here we adapted this approach to develop a high throughput mechanical screening (HTMS) system capable of measuring the mechanical properties of up to 48 materials simultaneously. The HTMS device was validated by testing various biomaterials and engineered cartilage constructs and by comparing the HTMS results to those derived from conventional single-sample compression tests. Further evaluation showed that the HTMS system was capable of distinguishing and identifying 'hits', or factors that influence the degree of tissue maturation. Future iterations of this device will focus on reducing data variability, increasing force sensitivity and range, as well as scaling-up to even larger (96-well) formats. This HTMS device provides a novel tool for cartilage tissue engineering, freeing experimental design from the limitations of mechanical testing throughput. © 2013 Published by Elsevier Ltd.
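As a toy illustration of the single-sample measurement that such a device parallelizes, an equilibrium compressive modulus can be estimated as stress over strain. The unconfined, small-strain formula below is a generic textbook approximation, not the paper's method, and the function name is hypothetical.

```python
def equilibrium_modulus(force_n, displacement_m, thickness_m, area_m2):
    """Equilibrium compressive modulus E = stress / strain (Pa) for an
    unconfined compression test, small-strain approximation."""
    stress = force_n / area_m2              # engineering stress, Pa
    strain = displacement_m / thickness_m   # engineering strain, dimensionless
    return stress / strain

# Example: 1 N equilibrium load on a 1 mm-thick, 0.2 cm^2 construct
# compressed by 100 um (10% strain) gives E = 0.5 MPa.
E = equilibrium_modulus(1.0, 0.0001, 0.001, 0.00002)
```

A 48-well device repeats exactly this calculation per well, which is why parallelizing the displacement-controlled test removes the throughput bottleneck.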

  9. High throughput static and dynamic small animal imaging using clinical PET/CT: potential preclinical applications

    International Nuclear Information System (INIS)

    Aide, Nicolas; Desmonts, Cedric; Agostini, Denis; Bardet, Stephane; Bouvard, Gerard; Beauregard, Jean-Mathieu; Roselt, Peter; Neels, Oliver; Beyer, Thomas; Kinross, Kathryn; Hicks, Rodney J.

    2010-01-01

The objective of the study was to evaluate state-of-the-art clinical PET/CT technology for performing static and dynamic imaging of several mice simultaneously. A mouse-sized phantom was imaged mimicking simultaneous imaging of three mice, with computation of recovery coefficients (RCs) and spillover ratios (SORs). Fifteen mice harbouring abdominal or subcutaneous tumours were imaged on clinical PET/CT with point spread function (PSF) reconstruction after injection of [18F]fluorodeoxyglucose or [18F]fluorothymidine. Three of these mice were imaged alone and simultaneously at radial positions -5, 0 and 5 cm. The remaining 12 tumour-bearing mice were imaged in groups of 3 to establish the quantitative accuracy of PET data using ex vivo gamma counting as the reference. Finally, a dynamic scan was performed in three mice simultaneously after the injection of 68Ga-ethylenediaminetetraacetic acid (EDTA). For typical lesion sizes of 7-8 mm, phantom experiments indicated RCs of 0.42 and 0.76 for ordered subsets expectation maximization (OSEM) and PSF reconstruction, respectively. For PSF reconstruction, SOR in air and SOR in water were 5.3 and 7.5%, respectively. Strong correlations were found between PET quantification and the reference measurements (r2 = 0.97; r2 = 0.98, slope = 0.89; r2 = 0.96, slope = 0.62), and renal function was followed with the 68Ga-EDTA dynamic acquisition. New generation clinical PET/CT can be used for simultaneous imaging of multiple small animals in experiments requiring high throughput and where a dedicated small animal PET system is not available. (orig.)

  10. A Fully Automated High-Throughput Zebrafish Behavioral Ototoxicity Assay.

    Science.gov (United States)

    Todd, Douglas W; Philip, Rohit C; Niihori, Maki; Ringle, Ryan A; Coyle, Kelsey R; Zehri, Sobia F; Zabala, Leanne; Mudery, Jordan A; Francis, Ross H; Rodriguez, Jeffrey J; Jacob, Abraham

    2017-08-01

Zebrafish animal models lend themselves to behavioral assays that can facilitate rapid screening of ototoxic, otoprotective, and otoregenerative drugs. Structurally similar to human inner ear hair cells, the mechanosensory hair cells on their lateral line allow the zebrafish to sense water flow and orient head-to-current in a behavior called rheotaxis. This rheotaxis behavior deteriorates in a dose-dependent manner with increased exposure to the ototoxin cisplatin, thereby establishing itself as an excellent biomarker for anatomic damage to lateral line hair cells. Building on work by our group and others, we have developed a new, fully automated high-throughput behavioral assay system that uses automated image analysis techniques to quantify rheotaxis behavior. This novel system consists of a custom-designed swimming apparatus and an imaging system built around network-controlled Raspberry Pi microcomputers capturing infrared video. Automated analysis techniques detect individual zebrafish, compute their orientation, and quantify the rheotaxis behavior of a zebrafish test population, producing a powerful, high-throughput behavioral assay. Using our fully automated biological assay to test a standardized ototoxic dose of cisplatin against varying doses of compounds that protect or regenerate hair cells may facilitate rapid translation of candidate drugs into preclinical mammalian models of hearing loss.
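A minimal sketch of the quantification step: given per-fish headings from automated image analysis, count the fraction oriented into the flow. The 45° tolerance and the index definition below are illustrative assumptions, not the authors' published metric.

```python
import numpy as np

def rheotaxis_index(headings_deg, flow_direction_deg=0.0, tolerance_deg=45.0):
    """Fraction of detected fish oriented head-to-current.

    headings_deg: heading of each detected fish, in degrees.
    A fish counts as performing rheotaxis when its heading lies within
    `tolerance_deg` of directly facing the oncoming flow.
    """
    # Facing the current means pointing opposite to the flow vector.
    upstream = (flow_direction_deg + 180.0) % 360.0
    # Smallest angular difference to the upstream direction, in [0, 180].
    diff = np.abs((np.asarray(headings_deg) - upstream + 180.0) % 360.0 - 180.0)
    return float(np.mean(diff <= tolerance_deg))
```

With flow toward 0°, fish heading 180° or 170° count as oriented, while fish heading 0° or 90° do not, so `rheotaxis_index([180, 170, 0, 90])` is 0.5; a dose-dependent drop in this fraction is the behavioral readout described above.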

  11. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

The growing need for rapid and accurate approaches to the large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands or tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which enables much more rapid data acquisition, higher accuracy in the assessment of phenotypic features, measurement of new parameters of these features, and the exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integrating genotypic and phenotypic data with parameters of the environment, as well as to analyze the relationships between genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  12. Deconstructing facts and frames in energy research: Maxims for evaluating contentious problems

    International Nuclear Information System (INIS)

    Sovacool, Benjamin K.; Brown, Marilyn A.

    2015-01-01

In this article, we argue that assumptions and values can play a combative, corrosive role in the generation of objective energy analysis. We then propose six maxims for energy analysts and researchers. Our maxim of information asks readers to keep up to date on trends in energy resources and technology. Our maxim of inclusivity asks readers to involve citizens and other public actors more in energy decisions. Our maxim of symmetry asks readers to keep their analysis of energy technologies centered always on both technology and society. Our maxim of reflexivity asks readers to be self-aware of their own assumptions. Our maxim of prudence asks readers to make energy decisions that are ethical or at least informed. Our maxim of agnosticism asks readers to look beyond a given energy technology to the services it provides and recognize that many systems can provide a desired service. We conclude that decisions in energy are justified by, if not predicated on, beliefs that may or may not be supported by objective data, constantly blurring the line between fact, fiction, and frames. - Highlights: • Assumptions and values can play a combative, corrosive role in the generation of objective energy analysis. • Decisions in energy are justified by, if not predicated on, beliefs. • We propose six maxims for energy analysts and researchers.

  13. Protocol: high throughput silica-based purification of RNA from Arabidopsis seedlings in a 96-well format

    OpenAIRE

    Salvo-Chirnside, Eliane; Kane, Steven; Kerr, Lorraine E

    2011-01-01

The increasing popularity of systems-based approaches to plant research has resulted in a demand for high throughput (HTP) methods to be developed. RNA extraction from multiple samples in an experiment is a significant bottleneck in performing systems-level genomic studies. Therefore we have established a high throughput method of RNA extraction from Arabidopsis thaliana to facilitate gene expression studies in this widely used plant model. We present optimised manual and automated p...

  14. Simultaneous Budget and Buffer Size Computation for Throughput-Constrained Task Graphs

    NARCIS (Netherlands)

    Wiggers, M.H.; Bekooij, Marco Jan Gerrit; Geilen, Marc C.W.; Basten, Twan

    Modern embedded multimedia systems process multiple concurrent streams of data processing jobs. Streams often have throughput requirements. These jobs are implemented on a multiprocessor system as a task graph. Tasks communicate data over buffers, where tasks wait on sufficient space in output

  15. Ultra-high frequency ultrasound biomicroscopy and high throughput cardiovascular phenotyping in a large scale mouse mutagenesis screen

    Science.gov (United States)

    Liu, Xiaoqin; Francis, Richard; Tobita, Kimimasa; Kim, Andy; Leatherbury, Linda; Lo, Cecilia W.

    2013-02-01

Ultrasound biomicroscopy (UBM) is ideally suited for phenotyping fetal mice for congenital heart disease (CHD), as imaging can be carried out noninvasively to provide both the hemodynamic and structural information essential for CHD diagnosis. Using the UBM (Vevo 2100; 40 MHz) in conjunction with a clinical ultrasound system (Acuson Sequoia C512; 15 MHz), we developed a two-step screening protocol to scan thousands of fetuses derived from ENU-mutagenized pedigrees. A wide spectrum of CHD was detected by the UBM and subsequently confirmed by follow-up necropsy and histopathology examination with episcopic fluorescence image capture. CHD observed included outflow anomalies, left/right heart obstructive lesions, septal/valvular defects and cardiac situs anomalies. Meanwhile, various extracardiac defects were found, such as polydactyly, craniofacial defects, exencephaly and omphalocele-cleft palate, most of which were associated with cardiac defects. Our analyses showed that the UBM was better at assessing cardiac structure and blood flow profiles, while conventional ultrasound allowed higher-throughput, low-resolution screening. Our study showed that integrating conventional clinical ultrasound imaging with the UBM for fetal mouse cardiovascular phenotyping can maximize the detection and recovery of CHD mutants.

  16. Postactivation potentiation biases maximal isometric strength assessment.

    Science.gov (United States)

    Lima, Leonardo Coelho Rabello; Oliveira, Felipe Bruno Dias; Oliveira, Thiago Pires; Assumpção, Claudio de Oliveira; Greco, Camila Coelho; Cardozo, Adalgiso Croscato; Denadai, Benedito Sérgio

    2014-01-01

Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine whether PAP would influence isometric strength assessment. Healthy male volunteers (n = 23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during PTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s(-1) versus 727 ± 158 N·m·s(-1)), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.
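The strength variables compared in this study can be computed from a torque-time trace roughly as follows. The function name and the single RTD fitting window are assumptions for illustration; published protocols typically report RTD over several windows (0-50 ms, 0-200 ms, etc.).

```python
import numpy as np

def isometric_strength_metrics(time_s, torque_nm, rtd_window_s=0.2):
    """Peak torque, time to peak torque, and early rate of torque development.

    time_s:    sample times in seconds, with contraction onset at t = 0
    torque_nm: torque samples in N·m
    RTD is estimated as the least-squares slope of torque over the first
    `rtd_window_s` seconds of the contraction.
    """
    t = np.asarray(time_s, dtype=float)
    torque = np.asarray(torque_nm, dtype=float)
    ipt = float(torque.max())                # isometric peak torque (IPT)
    tpti = float(t[int(torque.argmax())])    # time to achieve IPT (tPTI)
    in_window = t <= rtd_window_s
    rtd = float(np.polyfit(t[in_window], torque[in_window], 1)[0])  # N·m/s
    return ipt, tpti, rtd
```

Comparing these metrics between a first and a second MVC, as the study does, is what exposes the potentiation bias: IPT and RTD rise while tPTI falls on the repeated contraction.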

  17. Achieving high data throughput in research networks

    International Nuclear Information System (INIS)

    Matthews, W.; Cottrell, L.

    2001-01-01

After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165 TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40 TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3. So within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to infrastructure are essential, and a detailed understanding of performance issues and of the requirements for reliable high-throughput transfers is critical. In this talk, results from active and passive monitoring and direct measurements of throughput will be reviewed, and methods for achieving the ambitious requirements will be discussed.
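The scale of the problem is easy to check with a quick estimate. The 70% link-efficiency factor below is an assumption for illustration; the volumes and the OC3 rate come from the abstract.

```python
def transfer_days(terabytes, link_mbps, efficiency=0.7):
    """Days needed to move `terabytes` over a `link_mbps` link, assuming a
    fraction `efficiency` of the nominal line rate is actually achieved."""
    bits = terabytes * 1e12 * 8                       # decimal TB -> bits
    seconds = bits / (link_mbps * 1e6 * efficiency)   # effective bits/s
    return seconds / 86400.0

# Exporting a third of a ~165 TB dataset (about 55 TB) over OC3 (155 Mbps)
# takes on the order of a month and a half at 70% link efficiency.
days = transfer_days(55, 155)
```

Doubling the dataset yearly doubles this figure, which is why the abstract concludes the OC3 link would soon be saturated by the IN2P3 export alone.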

  18. Achieving High Data Throughput in Research Networks

    International Nuclear Information System (INIS)

    Matthews, W

    2004-01-01

After less than a year of operation, the BaBar experiment at SLAC has collected almost 100 million particle collision events in a database approaching 165 TB. Around 20 TB of data has been exported via the Internet to the BaBar regional center at IN2P3 in Lyon, France, and around 40 TB of simulated data has been imported from the Lawrence Livermore National Laboratory (LLNL). BaBar collaborators plan to double data collection each year and export a third of the data to IN2P3. So within a few years the SLAC OC3 (155 Mbps) connection will be fully utilized by file transfer to France alone. Upgrades to infrastructure are essential, and a detailed understanding of performance issues and of the requirements for reliable high-throughput transfers is critical. In this talk, results from active and passive monitoring and direct measurements of throughput will be reviewed, and methods for achieving the ambitious requirements will be discussed.

  19. Maximizing percentage depletion in solid minerals

    International Nuclear Information System (INIS)

    Tripp, J.; Grove, H.D.; McGrath, M.

    1982-01-01

    This article develops a strategy for maximizing percentage depletion deductions when extracting uranium or other solid minerals. The goal is to avoid losing percentage depletion deductions by staying below the 50% limitation on taxable income from the property. The article is divided into two major sections. The first section is comprised of depletion calculations that illustrate the problem and corresponding solutions. The last section deals with the feasibility of applying the strategy and complying with the Internal Revenue Code and appropriate regulations. Three separate strategies or appropriate situations are developed and illustrated. 13 references, 3 figures, 7 tables

  20. What currency do bumble bees maximize?

    Directory of Open Access Journals (Sweden)

    Nicholas L Charlton

    2010-08-01

In modelling bumble bee foraging, net rate of energetic intake has been suggested as the appropriate currency. The foraging behaviour of honey bees is better predicted by using efficiency, the ratio of energetic gain to expenditure, as the currency. We re-analyse several studies of bumble bee foraging and show that efficiency is as good a currency as net rate in terms of predicting behaviour. We suggest that future studies of the foraging of bumble bees should be designed to distinguish between net rate and efficiency maximizing behaviour in an attempt to discover which is the more appropriate currency.
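The two candidate currencies are simple to state. The numbers below are hypothetical foraging options, chosen so that the two currencies rank them differently, which is exactly the kind of discriminating design the abstract calls for.

```python
def net_rate(gain_j, cost_j, time_s):
    """Net rate of energetic intake: (gain - cost) / foraging time, in J/s."""
    return (gain_j - cost_j) / time_s

def efficiency(gain_j, cost_j):
    """Energetic efficiency: gain / expenditure (dimensionless)."""
    return gain_j / cost_j

# Two hypothetical foraging options (energy in joules, time in seconds):
a = {"gain_j": 10.0, "cost_j": 2.0, "time_s": 5.0}   # slow, cheap option
b = {"gain_j": 14.0, "cost_j": 6.0, "time_s": 4.0}   # fast, costly option
# Net rate ranks b first (2.0 vs 1.6 J/s); efficiency ranks a first (5.0 vs ~2.33).
```

An experiment offering bees a choice between options like `a` and `b` would reveal which currency the animals actually maximize; options ranked the same way by both currencies cannot.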

  1. New Maximal Two-distance Sets

    DEFF Research Database (Denmark)

    Lisonek, Petr

    1996-01-01

A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...
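The defining property is easy to test computationally. The naive checker below is an illustration of the definition only, not the classification algorithm of the paper.

```python
from itertools import combinations
import math

def is_two_distance_set(points, tol=1e-9):
    """Return True iff the pairwise distances between distinct points
    take exactly two different non-zero values (up to `tol`)."""
    dists = []
    for p, q in combinations(points, 2):
        d = math.dist(p, q)
        if not any(abs(d - e) <= tol for e in dists):
            dists.append(d)
        if len(dists) > 2:       # three distinct distances: fail early
            return False
    return len(dists) == 2

# The 4 vertices of a unit square form a two-distance set in E^2:
# the side (1) and the diagonal (sqrt(2)).
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Classification is of course much harder than verification: the paper's algorithm searches for the largest such sets in each dimension, whereas this checker only validates a candidate.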

  2. Maximizing policy learning in international committees

    DEFF Research Database (Denmark)

    Nedergaard, Peter

    2007-01-01

..., this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well...

  3. Pouliot type duality via a-maximization

    International Nuclear Information System (INIS)

    Kawano, Teruhiko; Ookouchi, Yutaka; Tachikawa, Yuji; Yagi, Futoshi

    2006-01-01

We study four-dimensional N=1 Spin(10) gauge theory with a single spinor and N_Q vectors at the superconformal fixed point via electric-magnetic duality and a-maximization. When gauge-invariant chiral primary operators hit the unitarity bounds, we find that the theory with no superpotential is identical to the one with some superpotential at the infrared fixed point. The auxiliary field method in the electric theory offers a satisfying description of the infrared fixed point, which is consistent with the better picture in the magnetic theory. In particular, it gives a clear description of the emergence of new massless degrees of freedom in the electric theory.
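For context, the quantity extremized in a-maximization (in the standard formulation of Intriligator and Wecht, assumed here for illustration) is the trial central charge

```latex
a(R) \;=\; \frac{3}{32}\left( 3\,\operatorname{tr} R^{3} \;-\; \operatorname{tr} R \right),
```

where the traces run over the R-charges of the fermions of the theory. The superconformal R-symmetry is the local maximum of this function over the anomaly-free trial R-symmetries; when a gauge-invariant chiral primary hits the unitarity bound R = 2/3 (dimension 1), it decouples as a free field and the trial function must be corrected accordingly, which is the phenomenon the abstract refers to.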

  4. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production, while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and

  5. A pocket device for high-throughput optofluidic holographic microscopy

    Science.gov (United States)

    Mandracchia, B.; Bianco, V.; Wang, Z.; Paturzo, M.; Bramanti, A.; Pioggia, G.; Ferraro, P.

    2017-06-01

Here we introduce a compact holographic microscope embedded onboard a Lab-on-a-Chip (LoC) platform. A wavefront division interferometer is realized by writing a polymer grating onto the channel to extract a reference wave from the object wave impinging on the LoC. A portion of the beam reaches the samples flowing along the channel path, carrying their information content to the recording device, while one of the diffraction orders from the grating acts as an off-axis reference wave. Polymeric micro-lenses are delivered onto the chip by Pyro-ElectroHydroDynamic (Pyro-EHD) inkjet printing techniques. Thus, all the required optical components are embedded onboard a pocket device, and fast, non-iterative reconstruction algorithms can be used. We use our device in combination with a novel high-throughput technique, named Space-Time Digital Holography (STDH). STDH exploits the samples' motion inside microfluidic channels to obtain a synthetic hologram, mapped in a hybrid space-time domain, with intrinsically useful features. Indeed, a single Linear Sensor Array (LSA) is sufficient to build up a synthetic representation of the entire experiment (i.e. the STDH) with unlimited Field of View (FoV) along the scanning direction, independently of the magnification factor. The throughput of the imaging system is dramatically increased, as STDH provides unlimited FoV and refocusable imaging of samples inside the liquid volume with no need for hologram stitching. To test our embedded STDH microscopy module, we counted, imaged, and tracked in 3D, with high throughput, red blood cells moving inside the channel volume under non-ideal flow conditions.

  6. High-throughput screening to enhance oncolytic virus immunotherapy

    Directory of Open Access Journals (Sweden)

    Allan KJ

    2016-04-01

KJ Allan,1,2 David F Stojdl,1-3 SL Swift1; 1Children's Hospital of Eastern Ontario (CHEO) Research Institute, 2Department of Biology, Microbiology and Immunology, 3Department of Pediatrics, University of Ottawa, Ottawa, ON, Canada. Abstract: High-throughput screens can rapidly scan and capture large amounts of information across multiple biological parameters. Although many screens have been designed to uncover potential new therapeutic targets capable of crippling viruses that cause disease, there have been relatively few directed at improving the efficacy of viruses that are used to treat disease. Oncolytic viruses (OVs) are biotherapeutic agents with an inherent specificity for treating malignant disease. Certain OV platforms - including those based on herpes simplex virus, reovirus, and vaccinia virus - have shown success against solid tumors in advanced clinical trials. Yet, many of these OVs have only undergone minimal engineering to solidify tumor specificity, with few extra modifications to manipulate additional factors. Several aspects of the interaction between an OV and a tumor-bearing host have clear value as targets to improve therapeutic outcomes. At the virus level, these include delivery to the tumor, infectivity, productivity, oncolysis, bystander killing, spread, and persistence. At the host level, these include engaging the immune system and manipulating the tumor microenvironment. Here, we review the chemical- and genome-based high-throughput screens that have been performed to manipulate such parameters during OV infection and analyze their impact on therapeutic efficacy. We further explore emerging themes that represent key areas of focus for future research. Keywords: oncolytic, virus, screen, high-throughput, cancer, chemical, genomic, immunotherapy

  7. High Throughput Neuro-Imaging Informatics

    Directory of Open Access Journals (Sweden)

    Michael I Miller

    2013-12-01

This paper describes neuroinformatics technologies at 1 mm anatomical scale based on high throughput 3D functional and structural imaging technologies of the human brain. The core is an abstract pipeline for converting functional and structural imagery into their high-dimensional neuroinformatic representations, an index containing O(E3-E4) discriminating dimensions. The pipeline is based on advanced image analysis coupled to digital knowledge representations in the form of dense atlases of the human brain at gross anatomical scale. We demonstrate the integration of these high-dimensional representations with machine learning methods, which have become the mainstay of other fields of science including genomics as well as social networks. Such high throughput facilities have the potential to alter the way medical images are stored and utilized in radiological workflows. The neuroinformatics pipeline is used to examine cross-sectional and personalized analyses of neuropsychiatric illnesses in clinical applications as well as longitudinal studies. We demonstrate the use of high throughput machine learning methods for supporting (i) cross-sectional image analysis to evaluate the health status of individual subjects with respect to the population data, and (ii) integration of image and non-image information for diagnosis and prognosis.

  8. Precision production: enabling deterministic throughput for precision aspheres with MRF

    Science.gov (United States)

    Maloney, Chris; Entezarian, Navid; Dumas, Paul

    2017-10-01

    Aspherical lenses offer advantages over spherical optics by improving image quality or reducing the number of elements necessary in an optical system. Aspheres are no longer used exclusively by high-end optical systems but are now replacing spherical optics in many applications. The need for a method of production manufacturing of precision aspheres has emerged, and it is part of the reason that the optics industry is shifting away from artisan-based techniques towards more deterministic methods. Not only does Magnetorheological Finishing (MRF) empower deterministic figure correction for the most demanding aspheres, but it also enables deterministic and efficient throughput for series production of aspheres. The Q-flex MRF platform is designed to support batch production in a simple and user-friendly manner. Thorlabs routinely utilizes the advancements of this platform and has provided results from using MRF to finish a batch of aspheres as a case study. We have developed an analysis notebook to evaluate necessary specifications for implementing quality control metrics. MRF brings confidence to optical manufacturing by ensuring high throughput for batch processing of aspheres.

  9. Software development with C++ maximizing reuse with object technology

    CERN Document Server

    Nielsen, Kjell

    2014-01-01

    Software Development with C++: Maximizing Reuse with Object Technology is about software development and object-oriented technology (OT), with applications implemented in C++. The basis for any software development project of complex systems is the process, rather than an individual method, which simply supports the overall process. This book is not intended as a general, all-encompassing treatise on OT. The intent is to provide practical information that is directly applicable to a development project. Explicit guidelines are offered for the infusion of OT into the various development phases.

  10. Maximization techniques for oilfield development profits

    International Nuclear Information System (INIS)

    Lerche, I.

    1999-01-01

    In 1981 Nind provided a quantitative procedure for estimating the optimum number of development wells to emplace on an oilfield to maximize profit. Nind's treatment assumed that there was a steady selling price, that all wells were placed in production simultaneously, and that each well's production profile was identical and a simple exponential decline with time. This paper lifts these restrictions to allow for price fluctuations, time-varying emplacement of wells, and production rates that are more in line with actual production records than is a simple exponential decline curve. As a consequence, it is possible to design production rate strategies, correlated with price fluctuations, so as to maximize the present-day worth of a field. For price fluctuations that occur on a time-scale rapid compared to inflation rates, it is appropriate to have production rates correlate directly with such price fluctuations. The same strategy does not apply for price fluctuations occurring on a time-scale long compared to inflation rates where, for small-amplitude price fluctuations, it is best to sell as much product as early as possible to overcome inflation factors, while for large-amplitude fluctuations the best strategy is to sell product as early as possible but to do so mainly on price upswings. Examples are provided to show how these generalizations of Nind's (1981) formula change the complexion of oilfield development optimization. (author)
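    The flavor of this optimization can be sketched numerically: under large-amplitude price swings, a schedule that front-loads sales but weights them toward price upswings can beat one that simply sells as early as possible. All figures below (prices, schedules, the 5% inflation rate) are illustrative assumptions, not values from the paper.

    ```python
    # Toy present-day-worth comparison of two production schedules under
    # fluctuating prices (hypothetical numbers for illustration only).

    def present_worth(production, prices, inflation=0.05):
        """Inflation-discounted revenue of a yearly production schedule."""
        return sum(q * p / (1 + inflation) ** t
                   for t, (q, p) in enumerate(zip(production, prices)))

    prices = [20, 30, 18, 32, 19]        # large-amplitude price swings

    # Both schedules sell the same 100 units of reserves over 5 years.
    sell_early   = [40, 30, 15, 10, 5]   # front-load sales uniformly
    sell_upswing = [25, 40, 5, 28, 2]    # front-load, but weight upswings

    print(present_worth(sell_early, prices))
    print(present_worth(sell_upswing, prices))   # higher present-day worth
    ```

    With these numbers the upswing-weighted schedule yields the larger discounted value, illustrating the paper's large-amplitude conclusion.
    
    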

  11. Upscaling and automation of electrophysiology: toward high throughput screening in ion channel drug discovery

    DEFF Research Database (Denmark)

    Asmild, Margit; Oswald, Nicholas; Krzywkowski, Karen M

    2003-01-01

    …by developing two lines of automated patch clamp products: a traditional pipette-based system called Apatchi-1, and a silicon chip-based system, QPatch. The degree of automation spans from semi-automation (Apatchi-1), where a trained technician interacts with the system in a limited way, to complete automation … (QPatch 96), where the system works continuously and unattended until screening of a full compound library is completed. The performance of the systems ranges from medium to high throughput.

  12. Optimization of throughput in semipreparative chiral liquid chromatography using stacked injection.

    Science.gov (United States)

    Taheri, Mohammadreza; Fotovati, Mohsen; Hosseini, Seyed-Kiumars; Ghassempour, Alireza

    2017-10-01

    An interesting mode of chromatography for preparation of pure enantiomers from pure samples is the method of stacked injection as a pseudocontinuous procedure. Maximum throughput and minimal production costs can be achieved by using the total chiral column length in this mode of chromatography. When sample loading is maximized, touching bands of the two enantiomers are often automatically achieved. Conventional equations show a direct correlation between touching-band loadability and the selectivity factor of the two enantiomers. The important question for one who wants to obtain the highest throughput is "How to optimize different factors including selectivity, resolution, run time, and loading of the sample in order to save time without missing the touching-band resolution?" To answer this question, tramadol and propranolol were separated on cellulose 3,5-dimethyl phenyl carbamate, as two pure racemic mixtures with low and high solubilities in the mobile phase, respectively. The mobile phase consisted of n-hexane with an alcohol modifier and diethylamine as the additive. A response surface methodology based on central composite design was used to optimize separation factors against the main responses. According to the stacked injection properties, two processes were investigated for maximizing throughput: one with a poorly soluble and another with a highly soluble racemic mixture. For each case, different optimization possibilities were inspected. It was revealed that resolution is a crucial response for separations of this kind. Peak area and run time are two critical parameters in the optimization of stacked injection for binary mixtures that have low solubility in the mobile phase. © 2017 Wiley Periodicals, Inc.
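    A central composite design of the kind used here is straightforward to generate programmatically. The sketch below produces the coded (dimensionless) design points for a two-factor rotatable CCD; mapping them to actual factor levels (e.g., modifier fraction, additive concentration) is an assumption of the illustration, not the paper's exact design.

    ```python
    # Hedged sketch: coded design points for a k-factor rotatable central
    # composite design (factorial corners + axial points + center replicates).
    import itertools

    def central_composite(k, n_center=3):
        """Return coded CCD points for k factors as a list of tuples."""
        alpha = (2 ** k) ** 0.25                 # rotatability criterion
        factorial = list(itertools.product([-1.0, 1.0], repeat=k))
        axial = []
        for i in range(k):
            for a in (-alpha, alpha):
                pt = [0.0] * k
                pt[i] = a
                axial.append(tuple(pt))
        center = [(0.0,) * k] * n_center
        return factorial + axial + center

    design = central_composite(2)
    print(len(design))   # 4 factorial + 4 axial + 3 center = 11 runs
    ```

    Fitting a quadratic response surface to the responses measured at these 11 runs is what lets the optimum of resolution, run time, and loading be located without an exhaustive grid of experiments.
    
    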

  13. High Throughput In Situ XAFS Screening of Catalysts

    International Nuclear Information System (INIS)

    Tsapatsaris, Nikolaos; Beesley, Angela M.; Weiher, Norbert; Tatton, Helen; Schroeder, Sven L. M.; Dent, Andy J.; Mosselmans, Frederick J. W.; Tromp, Moniek; Russu, Sergio; Evans, John; Harvey, Ian; Hayama, Shu

    2007-01-01

    We outline and demonstrate the feasibility of high-throughput (HT) in situ XAFS for synchrotron radiation studies. An XAS data acquisition and control system for the analysis of dynamic materials libraries under control of temperature and gaseous environments has been developed. The system is compatible with the 96-well industry standard and coupled to multi-stream quadrupole mass spectrometry (QMS) analysis of reactor effluents. An automated analytical workflow generates data quickly compared to traditional individual spectrum acquisition and analyses them in quasi-real time using an HT data analysis tool based on IFEFFIT. The system was used for the automated characterization of a library of 91 catalyst precursors containing ternary combinations of Cu, Pt, and Au on γ-Al2O3, and for the in situ characterization of Au catalysts supported on Al2O3 and TiO2.

  14. Innovations to increase throughput of the multipurpose irradiation facility

    Energy Technology Data Exchange (ETDEWEB)

    Cabalfin, Estelita G; Lanuza, Luvimina G; Maningas, Aurelio L; Solomon, Haydee M [Irradiation Services Unit, Nuclear Services and Training Division, Philippine Nuclear Research Institute, Quezon City (Philippines)

    1998-07-01

    With the installation and operation of the PNRI [Philippine Nuclear Research Institute] multipurpose irradiation facility, several local industries are now aware of, and in fact using gamma radiation for sterilization or decontamination of medical and pharmaceutical products, packaging materials and for food preservation. However, the multipurpose irradiation facility has limited capacity and capability, since this was designed as a pilot scale irradiator for research and development. To meet the increasing demand of gamma irradiation service, a new product handling system was locally designed, fabricated and installed. Performance, in terms of total loading and more importantly, radiation dose distribution of the new product handling system, was evaluated. An increase in product throughput was realized effectively with the new product handling system. (Author)

  15. Innovations to increase throughput of the multipurpose irradiation facility

    International Nuclear Information System (INIS)

    Cabalfin, Estelita G.; Lanuza, Luvimina G.; Maningas, Aurelio L.; Solomon, Haydee M.

    1998-01-01

    With the installation and operation of the PNRI [Philippine Nuclear Research Institute] multipurpose irradiation facility, several local industries are now aware of, and in fact using gamma radiation for sterilization or decontamination of medical and pharmaceutical products, packaging materials and for food preservation. However, the multipurpose irradiation facility has limited capacity and capability, since this was designed as a pilot scale irradiator for research and development. To meet the increasing demand of gamma irradiation service, a new product handling system was locally designed, fabricated and installed. Performance, in terms of total loading and more importantly, radiation dose distribution of the new product handling system, was evaluated. An increase in product throughput was realized effectively with the new product handling system. (Author)

  16. Maximal overlap with a fully separable state and translational invariance for multipartite entangled states

    International Nuclear Information System (INIS)

    Cui, H. T.; Yuan Di; Tian, J. L.

    2010-01-01

    The maximal overlap with the fully separable state for multipartite entangled pure states with translational invariance is studied explicitly through exact and numerical evaluations, focusing on the one-dimensional qubit system and some representative types of translational invariance. The results show that the translational invariance of the multipartite state can have an intrinsic effect on the determination of the maximal overlap and the nearest fully separable state for multipartite entangled states. Furthermore, a hierarchy of the basic entangled states with translational invariance is found, from which one can readily find the maximal overlap and a related fully separable state for multipartite states composed of different translational invariance structures.
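    For reference, the maximal overlap studied here is commonly defined in the geometric-measure-of-entanglement literature as the largest fidelity between the state and any fully separable (product) pure state:

    ```latex
    \Lambda_{\max}(\psi)
      = \max_{\,\lvert\phi\rangle = \lvert\phi_1\rangle \otimes \cdots \otimes \lvert\phi_N\rangle}
        \bigl\lvert \langle \phi \vert \psi \rangle \bigr\rvert ,
    \qquad
    E_G(\psi) = 1 - \Lambda_{\max}^{2}(\psi)
    ```

    Here the maximization runs over all N-party product states, and the associated geometric measure of entanglement E_G is small when the state is close to some product state; this standard definition is stated for orientation and is not quoted from the paper itself.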

  17. Application of an Elman neural network to the problem of predicting the throughput of a petroleum collecting station; Previsao da vazao de uma estacao coletora de petroleo utilizando redes neurais de Elman

    Energy Technology Data Exchange (ETDEWEB)

    Paula, Wesley R. de [Universidade Federal de Campina Grande (UFCG), PB (Brazil). Curso de Pos-Graduacao em Informatica; Sousa, Andre G. de [Universidade Federal de Campina Grande (UFCG), PB (Brazil). Curso de Ciencia da Computacao; Gomes, Herman M.; Galvao, Carlos de O. [Universidade Federal de Campina Grande (UFCG), PB (Brazil)

    2004-07-01

    The objective of this paper is to present an initial study on the application of an Elman neural network to the problem of predicting the throughput of a petroleum collecting station. This study is part of a wider project, which aims at producing an automatic real-time system to remotely control a petroleum distribution pipeline, in such a way that optimum efficiency can be assured in terms of: (I) maximizing the volume of oil transported; and (II) minimizing energy consumption, risks of failures, and damage to the environment. Experiments were carried out to determine the neural network parameters and to examine its performance under varying prediction horizons. Promising results (with low MSE) have been obtained for predictions up to 10 minutes into the future. (author)
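    The defining feature of an Elman network — a context layer that feeds the previous hidden state back into the hidden units alongside the current input — can be sketched in a few lines. The weights, sizes, and input series below are toy assumptions; the paper's actual topology, training method, and data are not reproduced here.

    ```python
    # Minimal Elman recurrent step: hidden state h depends on the current
    # input x and on the previous hidden state (the "context" units).
    import math

    def elman_step(x, h_prev, w_xh, w_hh, b_h):
        """One forward step; w_xh[i][j]: input i -> hidden j, w_hh: context."""
        n_hidden = len(b_h)
        h = []
        for j in range(n_hidden):
            s = b_h[j]
            s += sum(w_xh[i][j] * xi for i, xi in enumerate(x))
            s += sum(w_hh[i][j] * hi for i, hi in enumerate(h_prev))
            h.append(math.tanh(s))
        return h

    # Run a short normalized throughput series through a 2-unit hidden layer.
    w_xh = [[0.5, -0.3]]                  # 1 input -> 2 hidden units
    w_hh = [[0.2, 0.1], [-0.1, 0.4]]      # 2 context -> 2 hidden units
    b_h = [0.0, 0.0]
    h = [0.0, 0.0]
    for x_t in [0.9, 1.1, 1.0]:           # illustrative flow readings
        h = elman_step([x_t], h, w_xh, w_hh, b_h)
    print(h)
    ```

    In a full predictor, a trained output layer would map h to the forecast throughput some minutes ahead; the recurrence is what lets the network exploit the recent history of the flow signal.
    
    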

  18. A high throughput DNA extraction method with high yield and quality

    Directory of Open Access Journals (Sweden)

    Xin Zhanguo

    2012-07-01

    Full Text Available Abstract Background Preparation of large quantities of high-quality genomic DNA from a large number of plant samples is a major bottleneck for most genetic and genomic analyses, such as genetic mapping, TILLING (Targeting Induced Local Lesions IN Genomes), and next-generation sequencing directly from sheared genomic DNA. A variety of DNA preparation methods and commercial kits are available. However, they are either low throughput, low yield, or costly. Here, we describe a method for high throughput genomic DNA isolation from sorghum [Sorghum bicolor (L.) Moench] leaves and dry seeds with high yield, high quality, and affordable cost. Results We developed a high throughput DNA isolation method by combining a high yield CTAB extraction method with an improved cleanup procedure based on the MagAttract kit. The method yielded a large quantity of high quality DNA from both lyophilized sorghum leaves and dry seeds. The DNA yield was improved by nearly 30-fold with 4 times less consumption of MagAttract beads. The method can also be used in other plant species, including cotton leaves and pine needles. Conclusion A high throughput system for DNA extraction from sorghum leaves and seeds was developed and validated. The main advantages of the method are low cost, high yield, high quality, and high throughput. One person can process two 96-well plates in a working day at a cost of $0.10 per sample for magnetic beads plus other consumables that other methods also need.

  19. A High-Throughput Biological Calorimetry Core: Steps to Startup, Run, and Maintain a Multiuser Facility.

    Science.gov (United States)

    Yennawar, Neela H; Fecko, Julia A; Showalter, Scott A; Bevilacqua, Philip C

    2016-01-01

    Many labs have conventional calorimeters where denaturation and binding experiments are set up and run one at a time. While these systems are highly informative for biopolymer folding and ligand interaction, they require considerable manual intervention for cleaning and setup. As such, the throughput for such setups is typically limited to a few runs a day. With a large number of experimental parameters to explore, including different buffers, macromolecule concentrations, temperatures, ligands, mutants, controls, replicates, and instrument tests, the need for high-throughput automated calorimeters is on the rise. Lower sample volume requirements and reduced user intervention time compared to the manual instruments have improved the turnover of calorimetry experiments in a high-throughput format, where 25 or more runs can be conducted per day. The cost and effort required to maintain high-throughput equipment typically demand that these instruments be housed in a multiuser core facility. We describe here the steps taken to successfully start and run an automated biological calorimetry facility at Pennsylvania State University. Scientists from various departments at Penn State, including Chemistry, Biochemistry and Molecular Biology, Bioengineering, Biology, Food Science, and Chemical Engineering, are benefiting from this core facility. Samples studied include proteins, nucleic acids, sugars, lipids, synthetic polymers, small molecules, natural products, and virus capsids. This facility has led to a higher throughput of data, which has been leveraged into grant support, has helped attract new faculty hires, and has led to some exciting publications. © 2016 Elsevier Inc. All rights reserved.

  20. Automated image alignment for 2D gel electrophoresis in a high-throughput proteomics pipeline.

    Science.gov (United States)

    Dowsey, Andrew W; Dunn, Michael J; Yang, Guang-Zhong

    2008-04-01

    The quest for high-throughput proteomics has revealed a number of challenges in recent years. Whilst substantial improvements in automated protein separation with liquid chromatography and mass spectrometry (LC/MS), aka 'shotgun' proteomics, have been achieved, large-scale open initiatives such as the Human Proteome Organization (HUPO) Brain Proteome Project have shown that maximal proteome coverage is only possible when LC/MS is complemented by 2D gel electrophoresis (2-DE) studies. Moreover, both separation methods require automated alignment and differential analysis to relieve the bioinformatics bottleneck and so make high-throughput protein biomarker discovery a reality. The purpose of this article is to describe a fully automatic image alignment framework for the integration of 2-DE into a high-throughput differential expression proteomics pipeline. The proposed method is based on robust automated image normalization (RAIN) to circumvent the drawbacks of traditional approaches. These use symbolic representation at the very early stages of the analysis, which introduces persistent errors due to inaccuracies in modelling and alignment. In RAIN, a third-order volume-invariant B-spline model is incorporated into a multi-resolution schema to correct for geometric and expression inhomogeneity at multiple scales. The normalized images can then be compared directly in the image domain for quantitative differential analysis. Through evaluation against an existing state-of-the-art method on real and synthetically warped 2D gels, the proposed analysis framework demonstrates substantial improvements in matching accuracy and differential sensitivity. High-throughput analysis is established through an accelerated GPGPU (general purpose computation on graphics cards) implementation. Supplementary material, software and images used in the validation are available at http://www.proteomegrid.org/rain/.
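    The coarse-to-fine logic behind multi-resolution alignment can be illustrated with a toy 1D example: estimate a displacement at a downsampled scale, then refine it at full resolution. This sketches only the multi-resolution idea, not RAIN's volume-invariant B-spline model or its expression normalization.

    ```python
    # Toy coarse-to-fine shift estimation between two 1D "profiles".

    def best_shift(a, b, candidates):
        """Candidate shift of b minimizing mean squared difference to a."""
        def cost(s):
            pairs = [(a[i], b[i - s]) for i in range(len(a))
                     if 0 <= i - s < len(b)]
            return sum((x - y) ** 2 for x, y in pairs) / len(pairs)
        return min(candidates, key=cost)

    def downsample(sig):
        """Halve resolution by averaging adjacent pairs of samples."""
        return [(sig[i] + sig[i + 1]) / 2 for i in range(0, len(sig) - 1, 2)]

    a = [0, 0, 1, 4, 1, 0, 0, 0, 0, 0, 0, 0]
    b = [0, 0, 0, 0, 0, 0, 1, 4, 1, 0, 0, 0]   # a displaced by 4 samples

    # Coarse search at half resolution, then a +/-1 refinement at full scale.
    coarse = 2 * best_shift(downsample(a), downsample(b), range(-3, 4))
    fine = best_shift(a, b, [coarse - 1, coarse, coarse + 1])
    print(fine)
    ```

    The coarse pass keeps the search range small at full resolution, which is the same efficiency argument that motivates multi-resolution schemes for 2D gel images, where the deformation model is far richer than a single shift.
    
    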