WorldWideScience

Sample records for ann-based energy reconstruction

  1. Hybrid LSA-ANN Based Home Energy Management Scheduling Controller for Residential Demand Response Strategy

    Directory of Open Access Journals (Sweden)

    Maytham S. Ahmed

    2016-09-01

    Full Text Available Demand response (DR) programs can shift peak-time load to off-peak time, thereby reducing greenhouse gas emissions and allowing energy conservation. In this study, a home energy management scheduling controller for the residential DR strategy is proposed using a hybrid lightning search algorithm (LSA)-based artificial neural network (ANN) to predict the optimal ON/OFF status of home appliances. Consequently, the scheduled operation of several appliances is improved in terms of cost savings. In the proposed approach, a set of the most common residential appliances is modeled, and their activation is controlled by the hybrid LSA-ANN based home energy management scheduling controller. Four appliances, namely, air conditioner, water heater, refrigerator, and washing machine (WM), are modeled in Matlab/Simulink according to customer preferences and priority of appliances. The ANN controller has to be tuned properly, using a suitable learning rate and number of nodes in the hidden layers, to schedule the appliances optimally. Given that finding proper ANN tuning parameters is difficult, the LSA optimization is hybridized with the ANN to improve its performance by selecting the optimum number of neurons in each hidden layer and the learning rate, thereby improving the accuracy of the ANN's ON/OFF estimation. Results of the hybrid LSA-ANN are compared with those of a hybrid particle swarm optimization (PSO) based ANN to validate the developed algorithm. Results show that the hybrid LSA-ANN outperforms the hybrid PSO-based ANN. The proposed scheduling algorithm can significantly reduce peak-hour energy consumption during the DR event, by up to 9.7138% considering four appliances over a 7-h period.
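    The tuning problem described in this abstract can be sketched in a few lines: search over the number of hidden neurons and the learning rate of a small back-propagation network and keep the pair with the lowest error. Plain grid search stands in below for the lightning search algorithm, and the data, network size, and ON/OFF target rule are all invented for illustration.

```python
import numpy as np

def train_mlp(X, y, hidden, lr, epochs=300, seed=0):
    """Train a tiny one-hidden-layer network with plain back-propagation
    and return the final mean squared error on the training set."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)
        out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid -> ON probability
        err = out - y
        g_out = err * out * (1.0 - out)              # output-layer gradient
        g_h = (g_out @ W2.T) * (1.0 - h ** 2)        # hidden-layer gradient
        W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)
        W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(0)
    return float(np.mean(err ** 2))

# Invented ON/OFF target: appliance ON when the normalized hour+price signal is high.
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, (64, 3))
y = (X[:, 0] + X[:, 1] > 1.0).astype(float).reshape(-1, 1)

# Grid search stands in here for the lightning search algorithm (LSA):
best = min((train_mlp(X, y, h, lr), h, lr)
           for h in (4, 8, 16) for lr in (0.01, 0.05, 0.1))
print("best MSE %.4f with %d hidden neurons, lr=%.2f" % best)
```

    The same loop structure applies when a metaheuristic such as LSA or PSO proposes the (neurons, learning rate) candidates instead of a fixed grid.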

  2. Strategic planning for minimizing CO2 emissions using LP model based on forecasted energy demand by PSO Algorithm and ANN

    Energy Technology Data Exchange (ETDEWEB)

    Yousefi, M.; Omid, M.; Rafiee, Sh. [Department of Agricultural Machinery Engineering, University of Tehran, Karaj (Iran, Islamic Republic of); Ghaderi, S. F. [Department of Industrial Engineering, University of Tehran, Tehran (Iran, Islamic Republic of)

    2013-07-01

    Iran's primary energy consumption (PEC) was modeled as a linear function of five socioeconomic and meteorological explanatory variables using particle swarm optimization (PSO) and artificial neural network (ANN) techniques. Results revealed that the ANN outperforms the PSO model in predicting the test data. However, the PSO technique is simple and provides a closed-form expression for forecasting PEC. Energy demand was forecasted by PSO and ANN under the represented scenario. Finally, assuming about 10% renewable energy, the developed linear programming (LP) model under minimum CO2 emissions shows that Iran will emit about 2520 million metric tons of CO2 in 2025. The LP model indicated that maximum possible development of hydropower, geothermal and wind energy resources will satisfy the aim of minimizing CO2 emissions. Therefore, the main strategic policy for reducing CO2 emissions would be the exploitation of these resources.
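    The LP step can be illustrated with a minimal merit-order dispatch: when the model has a single demand constraint and per-source capacity bounds, filling demand from the lowest-emission sources first is the exact LP optimum. All capacities, demand, and emission factors below are invented placeholders, not the paper's data.

```python
# (name, capacity in TWh, emission factor in Mt CO2 per TWh) -- invented numbers
sources = [("hydro", 50.0, 0.0), ("wind", 20.0, 0.0),
           ("geothermal", 10.0, 0.05), ("fossil", 500.0, 0.7)]
demand = 300.0  # TWh, e.g. a forecast from the PSO/ANN demand models

def min_emission_dispatch(sources, demand):
    """Fill demand from the lowest-emission sources first. With one demand
    constraint and simple capacity bounds this greedy order is the LP optimum."""
    plan, total_co2, remaining = {}, 0.0, demand
    for name, cap, factor in sorted(sources, key=lambda s: s[2]):
        use = min(cap, remaining)
        plan[name] = use
        total_co2 += use * factor
        remaining -= use
    if remaining > 1e-9:
        raise ValueError("total capacity is insufficient for demand")
    return plan, total_co2

plan, co2 = min_emission_dispatch(sources, demand)
print(plan, round(co2, 1))
```

    With more constraints (ramp limits, budgets, regional balances) the greedy order no longer suffices and a general LP solver is needed.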

  4. Application of back-propagation artificial neural network (ANN) to predict crystallite size and band gap energy of ZnO quantum dots

    Science.gov (United States)

    Pelicano, Christian Mark; Rapadas, Nick; Cagatan, Gerard; Magdaluyo, Eduardo

    2017-12-01

    Herein, the crystallite size and band gap energy of zinc oxide (ZnO) quantum dots were predicted using an artificial neural network (ANN). Three input factors, namely reagent ratio, growth time, and growth temperature, were examined with respect to crystallite size and band gap energy as response factors. The results generated by the neural network model were then compared with the experimental results. Experimental crystallite sizes and band gap energies of the ZnO quantum dots were measured from TEM images and absorbance spectra, respectively. The Levenberg-Marquardt (LM) algorithm was used as the learning algorithm for the ANN model. The performance of the ANN model was then assessed through mean square error (MSE) and regression values. The ANN modeling results are in good agreement with the experimental data.
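    The two performance criteria named above, MSE and the regression value (the coefficient of determination R2), are easy to state explicitly. The measured/predicted band-gap values below are made-up illustrative numbers, not the paper's data.

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Coefficient of determination (the 'regression value')."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = float(np.sum((y_true - y_pred) ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return 1.0 - ss_res / ss_tot

# Hypothetical measured vs. predicted ZnO band-gap energies (eV):
measured = [3.60, 3.55, 3.48, 3.42, 3.37]
predicted = [3.58, 3.56, 3.50, 3.40, 3.38]
print(mse(measured, predicted), r_squared(measured, predicted))
```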

  5. Modeling of policies for reduction of GHG emissions in energy sector using ANN: case study-Croatia (EU).

    Science.gov (United States)

    Bolanča, Tomislav; Strahovnik, Tomislav; Ukić, Šime; Stankov, Mirjana Novak; Rogošić, Marko

    2017-07-01

    This study describes the development of a tool for testing different policies for the reduction of greenhouse gas (GHG) emissions in the energy sector using artificial neural networks (ANNs). The case study of Croatia was elaborated. Two energy consumption scenarios were used as a basis for calculations and predictions of GHG emissions: the business-as-usual (BAU) scenario and a sustainable scenario. Both are based on predicted energy consumption using different growth rates; the growth rates within the second scenario result from the implementation of corresponding energy efficiency measures in final energy consumption and an increasing share of renewable energy sources. Both the ANN architecture and the training methodology were optimized to produce a network able to describe the existing data and to achieve reliable forward prediction of emissions. The BAU scenario was found to produce continuously increasing emissions of all GHGs. The sustainable scenario was found to decrease the GHG emission levels of all gases with respect to BAU. The observed decrease was attributed to the group of measures termed the reduction of final energy consumption through energy efficiency measures.

  6. Tomographic image reconstruction using Artificial Neural Networks

    International Nuclear Information System (INIS)

    Paschalis, P.; Giokaris, N.D.; Karabarbounis, A.; Loudos, G.K.; Maintas, D.; Papanicolas, C.N.; Spanoudaki, V.; Tsoumpas, Ch.; Stiliaris, E.

    2004-01-01

    A new image reconstruction technique based on an Artificial Neural Network (ANN) is presented. The most crucial factors in designing such a reconstruction system are the network architecture and the number of input projections needed to reconstruct the image. Although the training phase requires a large number of input samples and considerable CPU time, the trained network is characterized by simplicity and quick response. The performance of this ANN is tested using several image patterns. It is intended to be used together with a phantom rotating table and the γ-camera of IASA for SPECT image reconstruction.

  7. ANN-based wavelet analysis for predicting electrical signal from photovoltaic power supply system

    Energy Technology Data Exchange (ETDEWEB)

    Mellit, A. [Medea Univ., Medea (Algeria). Inst. of Science Engineering, Dept. of Electronics]

    2007-07-01

    This study was conducted to predict different electrical signals from a photovoltaic power supply system (PVPS) using artificial neural networks (ANNs) combined with wavelet analysis. It involved the creation of a database of electrical signals (PV-generator current and voltage, battery current and voltage, regulator current and voltage) obtained from an experimental PVPS system installed in the south of Algeria. The potential applications were the sizing and performance analysis of PVPS systems; control of the maximum power point tracker (MPPT) in order to deliver the maximum energy from the PV-array; prediction of the optimal configuration (PV-array and battery sizing) of PVPS systems; expert configuration of PV-systems; fault diagnosis; supervision; and control and monitoring. First, based on the wavelet analysis, each electrical signal was mapped into several time-frequency domains. The PV-system was then divided into 3 subsystems corresponding to an ANN-PV generator model, an ANN-battery model, and an ANN-regulator model. An example of day-by-day prediction for each electrical signal was presented. The results of the proposed approach were in good agreement with experimental results, and the accuracy of the proposed approach was more satisfactory than when only an ANN was used. It was concluded that this methodology offers the possibility of developing a new expert configuration of PVPS systems by implementing the soft-computing ANN-wavelet program on a digital signal processing (DSP) circuit. 26 refs., 1 tab., 5 figs.

  8. ANN-GA based optimization of a high ash coal-fired supercritical power plant

    International Nuclear Information System (INIS)

    Suresh, M.V.J.J.; Reddy, K.S.; Kolar, Ajit Kumar

    2011-01-01

    Highlights: → Neuro-genetic power plant optimization is found to be an efficient methodology. → An advantage of the neuro-genetic algorithm is the possibility of on-line optimization. → Exergy loss in the combustor indicates the effect of coal composition on efficiency. -- Abstract: The efficiency of a coal-fired power plant depends on various operating parameters such as main steam/reheat steam pressures and temperatures, turbine extraction pressures, and excess air ratio for a given fuel. However, simultaneous optimization of all these operating parameters to achieve the maximum plant efficiency is a challenging task. This study deals with the coupled ANN and GA based (neuro-genetic) optimization of a high ash coal-fired supercritical power plant under Indian climatic conditions to determine the maximum possible plant efficiency. Power plant simulation data obtained from a flow-sheet program, 'Cycle-Tempo', are used to train the artificial neural network (ANN) to predict the energy input through the fuel (coal). The optimum set of operating parameters that results in the minimum energy input to the power plant is then determined by coupling the trained ANN model, as a fitness function, with the genetic algorithm (GA). A unit size of 800 MWe currently under development in India is considered for the thermodynamic analysis based on energy and exergy. Apart from optimizing the design parameters, the developed model can also be used for on-line optimization when a quick response is required. Furthermore, the effect of various coals on the thermodynamic performance of the optimized power plant is also determined.

  9. Feature Selection and ANN Solar Power Prediction

    Directory of Open Access Journals (Sweden)

    Daniel O’Leary

    2017-01-01

    Full Text Available A novel method of solar power forecasting for individuals and small businesses is developed in this paper based on machine learning, image processing, and acoustic classification techniques. Increases in the production of solar power at the consumer level require automated forecasting systems to minimize loss, cost, and environmental impact for homes and businesses that both produce and consume power (prosumers). These new participants in the energy market require new artificial neural network (ANN) performance tuning techniques to create accurate ANN forecasts. Input masking, an ANN tuning technique developed for acoustic signal classification and image edge detection, is applied to prosumer solar data to improve prosumer forecast accuracy over traditional macrogrid ANN performance tuning techniques. ANN inputs tailor time-of-day masking based on error clustering in the time domain. Results show an improvement in the prediction-to-target correlation (the R2 value), lowering the inaccuracy of sample predictions by 14.4%, with corresponding drops in mean absolute error of 5.37% and root mean squared error of 6.83%.

  10. Energy reconstruction in the long-baseline neutrino experiment.

    Science.gov (United States)

    Mosel, U; Lalakulich, O; Gallmeister, K

    2014-04-18

    The Long-Baseline Neutrino Experiment aims at measuring fundamental physical parameters to high precision and exploring physics beyond the standard model. Nuclear targets introduce complications towards that aim. We investigate the uncertainties in the energy reconstruction, based on quasielastic scattering relations, due to nuclear effects. The reconstructed event distributions as a function of energy tend to be smeared out and shifted by several hundred MeV in their oscillatory structure if standard event selection is used. We show that a more restrictive experimental event selection offers the possibility to reach the accuracy needed for a determination of the mass ordering and the CP-violating phase. Quasielastic-based energy reconstruction could thus be a viable alternative to calorimetric reconstruction also at higher energies.
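    The quasielastic scattering relation referred to above is the standard two-body estimator that reconstructs the neutrino energy from the outgoing lepton's energy and angle, assuming scattering off a bound neutron at rest. A minimal sketch follows; the 30 MeV binding energy is an assumed, illustrative value.

```python
import math

M_N, M_P, M_MU = 0.93957, 0.93827, 0.10566  # neutron, proton, muon masses (GeV)
E_B = 0.030                                 # assumed average binding energy (GeV)

def e_nu_qe(e_mu, cos_theta):
    """Quasielastic neutrino-energy estimator from muon energy and angle,
    for charged-current scattering off a bound neutron at rest (GeV)."""
    m_eff = M_N - E_B
    p_mu = math.sqrt(e_mu ** 2 - M_MU ** 2)
    num = 2.0 * m_eff * e_mu - (m_eff ** 2 + M_MU ** 2 - M_P ** 2)
    den = 2.0 * (m_eff - e_mu + p_mu * cos_theta)
    return num / den

print(round(e_nu_qe(1.0, 0.95), 3))  # reconstructed energy in GeV
```

    The nuclear effects discussed in the abstract (Fermi motion, multi-nucleon interactions, pion absorption) make real events deviate from this two-body assumption, which is exactly what smears and shifts the reconstructed distributions.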

  11. ANN based Performance Evaluation of BDI for Condition Monitoring of Induction Motor Bearings

    Science.gov (United States)

    Patel, Raj Kumar; Giri, V. K.

    2017-06-01

    Bearings are among the critical parts of rotating machines, and most failures arise from defective bearings. Bearing failure leads to machine failure and unpredicted productivity loss. Therefore, bearing fault detection and prognosis are an integral part of preventive maintenance procedures. In this paper, vibration signals for four conditions of a deep groove ball bearing, namely normal (N), inner race defect (IRD), ball defect (BD) and outer race defect (ORD), were acquired from a customized bearing test rig under four different conditions and three different fault sizes. Two approaches were adopted for statistical feature extraction from the vibration signal. In the first approach, the raw signal is used for statistical feature extraction; in the second, the statistical features are based on a bearing damage index (BDI). The proposed BDI technique uses wavelet packet node energy coefficient analysis. Both feature sets are used as inputs to an ANN classifier to evaluate its performance. A comparison of ANN performance is made between the raw vibration data and the data chosen by using the BDI. The ANN performance was found to be considerably higher when the BDI-based signals were used as inputs to the classifier.
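    The wavelet packet node energy idea behind the BDI can be sketched with a plain Haar wavelet packet: recursively split the signal into low/high-pass halves and sum the squared coefficients in each node. The defect signal below is synthetic; real BDI processing uses different wavelets and deeper decomposition trees.

```python
import numpy as np

def haar_packet_energies(x, levels=2):
    """Node energies of a Haar wavelet-packet decomposition of x
    (the length of x must be divisible by 2**levels)."""
    nodes = [np.asarray(x, float)]
    for _ in range(levels):
        nxt = []
        for n in nodes:
            nxt.append((n[0::2] + n[1::2]) / np.sqrt(2))  # low-pass half
            nxt.append((n[0::2] - n[1::2]) / np.sqrt(2))  # high-pass half
        nodes = nxt
    return np.array([float(np.sum(n ** 2)) for n in nodes])

t = np.linspace(0.0, 1.0, 256, endpoint=False)
healthy = np.sin(2 * np.pi * 8 * t)                   # smooth running condition
faulty = healthy + 0.5 * np.sin(2 * np.pi * 100 * t)  # added high-frequency defect tone
e_h = haar_packet_energies(healthy)
e_f = haar_packet_energies(faulty)
print(e_h.round(2), e_f.round(2))
```

    The defect tone shifts energy into the higher-frequency packet nodes, which is the kind of feature the BDI feeds to the ANN classifier.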

  12. Modelling the spectral irradiance distribution in sunny inland locations using an ANN-based methodology

    International Nuclear Information System (INIS)

    Torres-Ramírez, M.; Elizondo, D.; García-Domingo, B.; Nofuentes, G.; Talavera, D.L.

    2015-01-01

    This work is aimed at verifying that, in sunny inland locations, artificial intelligence techniques may provide an estimation of the spectral irradiance with adequate accuracy for photovoltaic applications. An ANN (artificial neural network) based method was developed, trained and tested to model the spectral distributions at wavelengths ranging from 350 to 1050 nm. Only commonly available input data are required: geographical information regarding the location, the specific date and time, and the horizontal global irradiance and ambient temperature. Historical information from a 24-month experimental campaign carried out in Jaén (Spain) provided the necessary data to train and test the ANN tool. A Kohonen self-organizing map was used as an innovative technique to classify the whole input dataset and build a small and representative training dataset. The shape of the spectral irradiance distribution, the in-plane global irradiance (GT) and irradiation (HT), and the APE (average photon energy) values obtained through the ANN method were statistically compared to the experimental ones. In terms of shape distribution fitting, the mean relative deformation error stays below 4.81%. The root mean square percentage error is around 6.89% and 0.45% when estimating GT and APE, respectively. Regarding HT, errors lie below 3.18% in all cases. - Highlights: • ANN-based model to estimate the spectral irradiance distribution in sunny inland locations. • The MRDE value stays below 4.81% in spectral irradiance distribution shape fitting. • The RMSPE is about 6.89% for the in-plane global irradiance and 0.45% for the average photon energy. • Errors stay below 3.18% for all months of the year in terms of incident irradiation. • Improved assessment of the impact of the solar spectrum on the performance of a PV module.
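    The APE quantity compared above has a simple definition: the integral of the spectral irradiance divided by the integrated photon flux, expressed in eV. A sketch over the 350-1050 nm band used in the paper, with a flat test spectrum (for which the APE reduces to the photon energy at the 700 nm mean wavelength):

```python
import numpy as np

Q = 1.602176634e-19     # elementary charge, C
H = 6.62607015e-34      # Planck constant, J s
C_LIGHT = 2.99792458e8  # speed of light, m/s

def _trapz(y, x):
    """Trapezoidal integration."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def average_photon_energy(wavelength_nm, irradiance):
    """APE in eV: spectral irradiance integral over the photon-flux integral."""
    lam = np.asarray(wavelength_nm, float) * 1e-9         # m
    irr = np.asarray(irradiance, float)
    energy_flux = _trapz(irr, lam)                        # W/m^2
    photon_flux = _trapz(irr * lam / (H * C_LIGHT), lam)  # photons/(s m^2)
    return energy_flux / photon_flux / Q

lam = np.linspace(350.0, 1050.0, 701)
flat = np.ones_like(lam)  # flat test spectrum, arbitrary units
print(round(average_photon_energy(lam, flat), 3))
```

    A blue-shifted real spectrum raises the APE above this flat-spectrum value; a red-shifted one lowers it.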

  13. Electrons for Neutrinos: Using Electron Scattering to Develop New Energy Reconstruction for Future Deuterium-Based Neutrino Detectors

    Science.gov (United States)

    Silva, Adrian; Schmookler, Barak; Papadopoulou, Afroditi; Schmidt, Axel; Hen, Or; Khachatryan, Mariana; Weinstein, Lawrence

    2017-09-01

    Using wide phase-space electron scattering data, we study a novel technique for neutrino energy reconstruction for future neutrino oscillation experiments. Accelerator-based neutrino oscillation experiments require a detailed understanding of neutrino-nucleus interactions, which are complicated by the underlying nuclear physics that governs the process. One area of concern is that the neutrino energy must be reconstructed event-by-event from the final-state kinematics. In charged-current quasielastic scattering, Fermi motion of the nucleons prevents exact energy reconstruction. However, in scattering from deuterium, the momenta of the electron and proton constrain the neutrino energy exactly, offering a new avenue for reducing systematic uncertainties. To test this approach, we analyzed d(e,e'p) data taken with the CLAS detector at Jefferson Lab Hall B and made kinematic selection cuts to obtain quasielastic events. We estimated the remaining inelastic background by using d(e,e'pπ-) events to produce a simulated dataset of events with an undetected π-. These results demonstrate the feasibility of energy reconstruction in a hypothetical future deuterium-based neutrino detector. Supported by the Paul E. Gray UROP Fund, MIT.

  14. Final Technical Report, Wind Generator Project (Ann Arbor)

    Energy Technology Data Exchange (ETDEWEB)

    Geisler, Nathan [City of Ann Arbor, MI (United States)]

    2017-03-20

    A Final Technical Report (57 pages) describing educational exhibits and devices focused on wind energy, and related outreach activities and programs. The project partnership includes the City of Ann Arbor, MI and the Ann Arbor Hands-on Museum, along with additional sub-recipients, and the U.S. Department of Energy/Office of Energy Efficiency and Renewable Energy (EERE). The report relays key milestones and sub-tasks, as well as numerous graphics and images of five (5) transportable wind energy demonstration devices and five (5) wind energy exhibits designed and constructed between 2014 and 2016 for transport and use by the Ann Arbor Hands-on Museum.

  15. A Hybrid Model Based on Wavelet Decomposition-Reconstruction in Track Irregularity State Forecasting

    Directory of Open Access Journals (Sweden)

    Chaolong Jia

    2015-01-01

    Full Text Available Wavelets adapt automatically to the requirements of time-frequency signal analysis, can focus on any detail of the signal, and decompose a function into a representation in terms of a series of simple basis functions, which is of both theoretical and practical significance. This paper therefore subdivides track irregularity time series based on the idea of wavelet decomposition-reconstruction and seeks the best-fitting forecast models for the detail signals and the approximate signal obtained through wavelet decomposition of the track irregularity time series. Based on this idea, a piecewise gray-ARMA recursive model based on wavelet decomposition and reconstruction (PG-ARMARWDR) and a piecewise ANN-ARMA recursive model based on wavelet decomposition and reconstruction (PANN-ARMARWDR) are proposed. Comparison and analysis of the two models show that both can achieve higher accuracy.
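    The decomposition-reconstruction idea can be sketched with a one-level Haar transform: the series splits into an approximate (trend) signal and a detail signal that sum back exactly to the original, and each part can then be fitted by its own forecast model. The series below is synthetic.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: trend (a) and detail (d) coefficients."""
    x = np.asarray(x, float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_reconstruct(a, d):
    """Inverse one-level Haar transform."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

# Synthetic "track irregularity" series: slow drift plus rough detail.
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.0, 0.1, 128)) + 0.05 * rng.normal(size=128)
a, d = haar_decompose(series)
approx_signal = haar_reconstruct(a, np.zeros_like(d))  # fit e.g. by a gray/ARMA model
detail_signal = haar_reconstruct(np.zeros_like(a), d)  # fit e.g. by an ANN-ARMA model
print(float(np.max(np.abs(approx_signal + detail_signal - series))))
```

    Forecasting each band separately and summing the per-band forecasts is exactly the piecewise recursive scheme the two proposed models build on, with deeper wavelet trees in practice.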

  16. Direct reconstruction of dark energy.

    Science.gov (United States)

    Clarkson, Chris; Zunckel, Caroline

    2010-05-28

    An important issue in cosmology is reconstructing the effective dark energy equation of state directly from observations. With so few physically motivated models, future dark energy studies cannot be based only on constraining a dark energy parameter space. We present a new nonparametric method which can accurately reconstruct a wide variety of dark energy behavior with no prior assumptions about it. It is simple, quick and relatively accurate, and involves no expensive explorations of parameter space. The technique uses principal component analysis and a combination of information criteria to identify real features in the data, and tailors the fitting functions to pick up trends and smooth over noise. We find that we can constrain a large variety of w(z) models to within 10%-20% at redshifts z ≲ 1 using just SNAP-quality data.
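    The principal-component step can be sketched with an SVD: given many noisy realizations of w(z), keeping only the leading component(s) picks up the real trend and smooths over noise. The w(z) model, amplitudes, and noise level below are invented for illustration and have nothing to do with the paper's data pipeline.

```python
import numpy as np

rng = np.random.default_rng(2)
z = np.linspace(0.0, 1.0, 50)
w_base = -1.0 + 0.3 * z                 # an illustrative evolving equation of state
amps = rng.normal(1.0, 0.2, 200)        # amplitude of each noisy realization
truth = np.outer(amps, w_base)
data = truth + rng.normal(0.0, 0.1, truth.shape)

# Principal component analysis via SVD of the mean-centred data matrix:
mean = data.mean(axis=0)
U, s, Vt = np.linalg.svd(data - mean, full_matrices=False)
k = 1                                   # keep only the leading component
denoised = mean + (U[:, :k] * s[:k]) @ Vt[:k]

print(float(np.mean((data - truth) ** 2)),
      float(np.mean((denoised - truth) ** 2)))
```

    Choosing how many components to keep is where the information criteria mentioned in the abstract come in: too few components miss real features, too many fit noise.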

  17. Applying a supervised ANN (artificial neural network) approach to the prognostication of driven wheel energy efficiency indices

    International Nuclear Information System (INIS)

    Taghavifar, Hamid; Mardani, Aref

    2014-01-01

    This paper examines the prediction of the energy efficiency indices of driven wheels (i.e. traction coefficient and tractive power efficiency) as affected by wheel load, slippage and forward velocity, each at three levels with three replicates, to form a total of 162 data points. The pertinent experiments were carried out in a soil bin testing facility. A feed-forward ANN (artificial neural network) with the standard BP (back propagation) algorithm was used to construct a supervised model to predict the energy efficiency indices of driven wheels. In view of the statistical performance criteria (i.e. MSE (mean squared error) and R2), a supervised ANN with 3-8-10-2 topology and the Levenberg-Marquardt training algorithm represented the optimal model. The modeling results indicated that the ANN is a powerful technique for predicting the stochastic energy efficiency indices as affected by soil-wheel interactions, with an MSE of 0.001194 and R2 of 0.987 and 0.9772 for traction coefficient and tractive power efficiency, respectively. It was found that traction coefficient and tractive power efficiency increase with increased slippage; a similar trend holds for the influence of wheel load. While increased velocity led to an increase in tractive power efficiency, velocity had no significant effect on the traction coefficient. - Highlights: • Energy efficiency indices were assessed as affected by tire parameters. • An ANN was applied for prediction of the objective parameters. • A 3-8-10-2 ANN with MSE of 0.001194 and R2 of 0.987 and 0.9772 was designated as the optimal model. • Optimal values of learning rate and momentum were found to be 0.9 and 0.5, respectively.

  18. Strategy on energy saving reconstruction of distribution networks based on life cycle cost

    Science.gov (United States)

    Chen, Xiaofei; Qiu, Zejing; Xu, Zhaoyang; Xiao, Chupeng

    2017-08-01

    Because the funds for actual distribution network reconstruction projects are often limited, a cost-benefit model and a decision-making method are crucial for distribution network energy-saving reconstruction projects. From the perspective of life cycle cost (LCC), the research life cycle is first determined for the energy-saving reconstruction of distribution networks with multiple devices. Then, a new life-cycle cost-benefit model for the energy-saving reconstruction of distribution networks is developed, in which the modification schemes include distribution transformer replacement, line replacement and reactive power compensation. For the operation loss cost and maintenance cost, an operation cost model considering the influence of seasonal load characteristics and a segmented maintenance cost model for transformers are proposed. Finally, aiming at the highest energy-saving profit per unit LCC, a decision-making method is developed that also considers financial and technical constraints. The model and method are applied to a real distribution network reconstruction, and the results prove that they are effective.
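    The LCC comparison at the heart of such a decision can be sketched as an initial investment plus the discounted sum of yearly loss and maintenance costs; the scheme with the lower life-cycle cost wins. The numbers below are illustrative placeholders, not the paper's model or data.

```python
def life_cycle_cost(investment, annual_loss_cost, annual_maintenance,
                    years, discount_rate):
    """Initial investment plus the present value of yearly loss + maintenance costs."""
    pv = sum((annual_loss_cost + annual_maintenance) / (1.0 + discount_rate) ** t
             for t in range(1, years + 1))
    return investment + pv

# Keep the old transformer (no investment, high losses) vs. replace it:
lcc_old = life_cycle_cost(0.0, 12.0, 2.0, 20, 0.06)
lcc_new = life_cycle_cost(80.0, 4.0, 1.5, 20, 0.06)
print(round(lcc_old, 2), round(lcc_new, 2))
```

    With these made-up figures the low-loss replacement pays for itself over the 20-year horizon; the paper's model additionally splits the maintenance cost into segments and makes the loss cost load-season dependent.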

  19. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    Science.gov (United States)

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

    In this article, a novel and accurate scheme for fault detection, classification and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as inputs to train the ANN; such an approach has not been used in earlier fault analysis algorithms. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. These MOV energy signals are then fed as inputs to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten fault types in a test power system model, at different fault inception angles and over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India, are considered for this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity and robustness.

  20. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in Beam-Hardening Artifacts (BHA). Existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and from the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models accounting for beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem; this transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  1. HAWC Energy Reconstruction via Neural Network

    Science.gov (United States)

    Marinelli, Samuel; HAWC Collaboration

    2016-03-01

    The High-Altitude Water-Cherenkov (HAWC) γ-ray observatory is located at 4100 m above sea level on the Sierra Negra mountain in the state of Puebla, Mexico. Its 300 water-filled tanks are instrumented with PMTs that detect Cherenkov light produced by charged particles in atmospheric air showers induced by TeV γ-rays. The detector became fully operational in March of 2015. With a 2-sr field of view and duty cycle exceeding 90%, HAWC is a survey instrument sensitive to diverse γ-ray sources, including supernova remnants, pulsar wind nebulae, active galactic nuclei, and others. Particle-acceleration mechanisms at these sources can be inferred by studying their energy spectra, particularly at high energies. We have developed a technique for estimating primary γ-ray energies using an artificial neural network (ANN). Input variables to the ANN are selected to characterize shower multiplicity in the detector, the fraction of the shower contained in the detector, and atmospheric attenuation of the shower. Monte Carlo simulations show that the new estimator has superior performance to the current estimator used in HAWC publications. This work was supported by the National Science Foundation.

  2. Energy reconstruction of hadrons in highly granular combined ECAL and HCAL systems

    Science.gov (United States)

    Israeli, Y.

    2018-05-01

    This paper discusses the hadronic energy reconstruction of two combined electromagnetic and hadronic calorimeter systems using physics prototypes of the CALICE collaboration: the silicon-tungsten electromagnetic calorimeter (Si-W ECAL) with the scintillator-SiPM based analog hadron calorimeter (AHCAL), and the scintillator-tungsten electromagnetic calorimeter (ScECAL) with the AHCAL. These systems were operated in hadron beams at CERN and FNAL, permitting the study of the performance of combined ECAL and HCAL systems. Two energy reconstruction techniques are used: a standard reconstruction based on calibrated sub-detector energy sums, and one based on a software compensation algorithm that makes use of the local energy density information provided by the high granularity of the detectors. The software compensation-based algorithm improves the hadronic energy resolution by up to 30% compared to the standard reconstruction. The combined-system data show energy resolutions comparable to those achieved for data with showers starting only in the AHCAL, demonstrating the success of the inter-calibration of the different sub-systems despite their different geometries and readout technologies.
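    The software compensation idea can be sketched in a few lines: classify each hit by its local energy density and weight low-density (hadron-like) hits up and high-density (electromagnetic-like) hits down before summing. The bin edges and weights below are invented placeholders; CALICE derives its weights from simulation and beam data.

```python
import numpy as np

def software_compensated_energy(hit_energies, cell_volume, weights):
    """Weight each hit by a factor chosen from its local energy-density bin,
    then sum. The bin edges (1 and 5 per unit volume) are illustrative."""
    hit_energies = np.asarray(hit_energies, float)
    density = hit_energies / cell_volume
    bins = np.digitize(density, [1.0, 5.0])   # 0: low, 1: medium, 2: high density
    w = np.asarray(weights, float)[bins]
    return float(np.sum(hit_energies * w))

hits = np.array([0.2, 0.4, 3.0, 8.0])  # energy per cell, GeV (invented)
raw = float(hits.sum())                # standard calibrated energy sum
comp = software_compensated_energy(hits, cell_volume=1.0,
                                   weights=[1.4, 1.0, 0.8])
print(raw, comp)
```

    Event-by-event reweighting of this kind equalizes the response to the electromagnetic and hadronic shower components, which is what narrows the reconstructed energy distribution.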

  3. A Sensitive ANN Based Differential Relay for Transformer Protection with Security against CT Saturation and Tap Changer Operation

    OpenAIRE

    KHORASHADI-ZADEH, Hassan; LI, Zuyi

    2014-01-01

    This paper presents an artificial neural network (ANN) based scheme for fault identification in power transformer protection. The proposed scheme is featured by the application of ANN to identifying system patterns, the unique choice of harmonics of positive sequence differential currents as ANN inputs, the effective handling of current transformer (CT) saturation with an ANN based approach, and the consideration of tap changer position for correcting secondary CT current. Performanc...

  4. ANN based optimization of a solar assisted hybrid cooling system in Turkey

    Energy Technology Data Exchange (ETDEWEB)

    Ozgur, Arif; Yetik, Ozge; Arslan, Oguz [Mechanical Eng. Dept., Engineering Faculty, Dumlupinar University (Turkey)], email: maozgur@dpu.edu.tr, email: ozgeyetik@dpu.edu.tr, email: oarslan@dpu.edu.tr

    2011-07-01

    This study optimized a solar-assisted hybrid cooling system with the refrigerants R717, R141b, R134a and R123 using an artificial neural network (ANN) model based on average total solar radiation, ambient temperature, generator temperature, condenser temperature, intercooler temperature and fluid type. ANN is a relatively new tool that works rapidly and can thus aid the design and optimization of complex power cycles. Because neural networks can capture nonlinear behaviour, a flexible ANN algorithm was introduced to evaluate the solar ejector cooling systems. The best results, obtained for R717, were a COPs of 1.35 and a COPc of 3.03 when the average total solar radiation was 674.72 W/m2, the ambient, generator, condenser and intercooler temperatures were 17.9, 80, 15 and 13 degrees Celsius respectively, and the Levenberg-Marquardt (LM) algorithm was used with 14 neurons in a single hidden layer.

  5. Quantitative reconstruction of the last interglacial vegetation and climate based on the pollen record from Lake Baikal, Russia

    Energy Technology Data Exchange (ETDEWEB)

    Tarasov, P. [Free University, Institute of Geological Sciences, Palaeontology Department, Berlin (Germany); Granoszewski, W. [Polish Geological Institute, Carpathian Branch, Krakow (Poland); Bezrukova, E.; Abzaeva, A. [Siberian Branch Russian Academy of Sciences, Institute of Geochemistry, Irkutsk (Russian Federation); Brewer, S. [CEREGE CNRS/University P. Cezanne, UMR 6635, BP80, Aix-en-Provence (France); Nita, M. [University of Silesia, Faculty of Earth Sciences, Sosnowiec (Poland); Oberhaensli, H. [GeoForschungsZentrum, Potsdam (Germany)

    2005-11-01

    Changes in the mean temperature of the coldest (T_c) and warmest (T_w) months, annual precipitation (P_ann) and moisture index (α) were reconstructed from a continuous pollen record from Lake Baikal, Russia. The pollen sequence CON01-603-2 (53°57'N, 108°54'E) was recovered from a water depth of 386 m on the Continent Ridge and dated to ca. 130-114.8 ky BP. This time interval covers the complete last interglacial (LI), corresponding to MIS 5e. Results of pollen analysis and pollen-based quantitative biome reconstruction show pronounced changes in the regional vegetation throughout the record. Shrubby tundra covered the area at the beginning of MIS 5e (ca. 130-128 ky), consistent with the end of the Middle Pleistocene glaciation. The late-glacial climate was characterised by low winter and summer temperatures (T_c ≈ -38 to -35 °C and T_w ≈ 11-13 °C) and low annual precipitation (P_ann ≈ 300 mm). However, the wide spread of tundra vegetation suggests rather moist environments associated with low temperatures and evaporation (reconstructed α ≈ 1). Tundra was replaced by boreal conifer forest (taiga) by ca. 128 ky BP, suggesting a transition to the interglacial. The taiga-dominant phase lasted until ca. 117.4 ky BP, i.e. about 10 ky. The most favourable climate conditions occurred during the first half of the LI. P_ann reached 500 mm soon after 128 ky BP. However, temperature changed more gradually. Maximum values of T_c ≈ -20 °C and T_w ≈ 16-17 °C are reconstructed from about 126 ky BP. Conditions became gradually colder after ca. 121 ky BP. T_c dropped to ≈ -27 °C and T_w to ≈ 15 °C by 119.5 ky BP. The reconstructed increase in continentality was accompanied by a decrease in P_ann to ≈ 400-420 mm. However, the climate was still humid enough (α ≈ 0.9) to

  6. arXiv Energy Reconstruction of Hadrons in highly granular combined ECAL and HCAL systems

    CERN Document Server

    Israeli, Yasmine

    2018-05-03

    This paper discusses the hadronic energy reconstruction of two combined electromagnetic and hadronic calorimeter systems using physics prototypes of the CALICE collaboration: the silicon-tungsten electromagnetic calorimeter (Si-W ECAL) with the scintillator-SiPM-based analog hadron calorimeter (AHCAL), and the scintillator-tungsten electromagnetic calorimeter (ScECAL) with the AHCAL. These systems were operated in hadron beams at CERN and FNAL, permitting the study of performance in combined ECAL and HCAL systems. Two techniques for energy reconstruction are used: a standard reconstruction based on calibrated sub-detector energy sums, and one based on a software compensation algorithm that makes use of the local energy density information provided by the high granularity of the detectors. The software compensation-based algorithm improves the hadronic energy resolution by up to 30% compared to the standard reconstruction. The combined system data show comparable energy resolutions to the one achieved for da...

  7. Anne-Ly Võlli: Every person and institution needs its own approach / Anne-Ly Võlli ; interviewed by Jaanika Kressa

    Index Scriptorium Estoniae

    Võlli, Anne-Ly, 1976-

    2009-01-01

    Following a public competition, the non-profit Jõgevamaa Municipalities Activation Centre (MTÜ Jõgevamaa Omavalitsuste Aktiviseerimiskeskus) appointed Anne-Ly Võlli to its management board; her task is to manage the centre's activities and to develop cooperation between the partner municipalities and other partners.

  8. Simulation model of ANN based maximum power point tracking controller for solar PV system

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)

    2011-02-15

    In this paper, the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and an optimal control unit. The ANN tracker estimates the voltages and currents corresponding to the maximum power delivered by the solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained on a set of 124 patterns using the back-propagation algorithm. The mean square error between tracker output and target values is set to be of the order of 10^-5, and the learning process converges successfully in 1281 epochs. The accuracy of the ANN tracker has been validated using different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)
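
The last step, converting the tracker's MPP estimate into a chopper duty cycle, can be illustrated for one common case. Assuming an ideal buck converter feeding a resistive load (the abstract does not specify the converter topology, and the numbers below are invented):

```python
import math

def buck_duty_cycle(v_mp, i_mp, r_load):
    # For an ideal buck converter, the input resistance seen by the PV
    # array is R_load / D**2. Matching it to the MPP resistance
    # V_mp / I_mp gives D = sqrt(R_load * I_mp / V_mp); clamp to the
    # physically meaningful range (0, 1].
    d = math.sqrt(r_load * i_mp / v_mp)
    return min(d, 1.0)

# hypothetical tracker output: V_mp = 17 V, I_mp = 3.4 A, load = 2.5 ohm
duty = buck_duty_cycle(17.0, 3.4, 2.5)
```

With these numbers the matched duty cycle is about 0.71; in the paper's scheme the ANN tracker would supply fresh V_mp/I_mp estimates as irradiance and temperature change, and the control unit would recompute D accordingly.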

  9. Application of ann-based decision making pattern recognition to fishing operations

    Energy Technology Data Exchange (ETDEWEB)

    Akhlaghinia, M.; Torabi, F.; Wilton, R.R. [University of Regina, Saskatchewan (Canada). Faculty of Engineering. Dept. of Petroleum Engineering], e-mail: Farshid.Torabi@uregina.ca

    2010-10-15

    Decision making is a crucial part of fishing operations. Proper decisions should be made to prevent the time and costs wasted on unsuccessful operations. This paper presents a novel model to help drilling managers decide when to commence and when to quit a fishing operation. A decision-making model based on an Artificial Neural Network (ANN) has been developed that utilizes pattern recognition based on 181 fishing incidents from one of the most fish-prone fields in southwest Iran. All parameters chosen to train the ANN-based pattern recognition tool are assumed to play a role in the success of the fishing operation and are therefore used to decide whether a fishing operation should be performed or not. If the tool deems the operation suitable for consideration, a cost analysis of the fishing operation can then be performed to justify its overall cost. (author)

  10. Three-dimensional atomic-image reconstruction from a single-energy Si(100) photoelectron hologram

    International Nuclear Information System (INIS)

    Matsushita, T.; Agui, A.; Yoshigoe, A.

    2004-01-01

    Full text: J. J. Barton proposed a basic algorithm for three-dimensional atomic-image reconstruction from a photoelectron hologram, based on the Fourier transform (FT). When a single-energy hologram is used, a twin image appears in principle. The twin image disappears when a multi-energy hologram is used, but this requires a longer measuring time and a variable-energy light source; moreover, reconstruction using a simple FT is difficult because the scattered electron wave is not an s-symmetric wave. Many theoretical and experimental approaches based on the FT have been investigated. We propose a new algorithm, the so-called 'scattering pattern matrix', which is not based on the FT. The algorithm utilizes the scattering pattern and an iterative gradient method. A real-space image can be reconstructed from a single-energy hologram without an initial model, and the twin image disappears. We reconstructed the three-dimensional atomic image of the Si bulk structure from an experimental single-energy hologram of Si(100) 2s emission. The experiment was performed using an Al-Kα light source. We then calculated a vertical slice image of the reconstructed Si bulk structure; the atomic images appear around the expected positions

  11. SU-E-I-41: Dictionary Learning Based Quantitative Reconstruction for Low-Dose Dual-Energy CT (DECT)

    International Nuclear Information System (INIS)

    Xu, Q; Xing, L; Xiong, G; Elmore, K; Min, J

    2015-01-01

    Purpose: DECT collects two sets of projection data under higher and lower energies. With appropriate composition methods applied to the linear attenuation coefficients, quantitative information about the object, such as density, can be obtained. In practice, one of the important problems in DECT is the radiation dose due to the doubled scans. This work is aimed at establishing a dictionary learning based reconstruction framework for DECT that improves image quality while reducing the imaging dose. Methods: In our method, two dictionaries were learned in advance from high-energy and low-energy image datasets of similar objects acquired under normal dose. The linear attenuation coefficient was decomposed into two basis components with a material-based composition method. An iterative reconstruction framework was employed, in which the two basis components were alternately updated using the DECT datasets and dictionary learning based sparse constraints. After one updating step under the dataset fidelity constraints, both high-energy and low-energy images can be obtained from the two basis components. Sparse constraints based on the learned dictionaries were applied to the high- and low-energy images to update the two basis components. The iterative calculation continues until a pre-set number of iterations is reached. Results: We evaluated the proposed dictionary learning method with dual-energy images collected using a DECT scanner. We re-projected the projection data with added Poisson noise to reflect the low-dose situation. The results obtained by the proposed method were compared with those obtained using an FBP based method and a TV based method. It was found that the proposed approach yields better results than the other methods, with higher resolution and less noise. Conclusion: The use of dictionaries learned from DECT images under normal dose is valuable and leads to improved results at much lower imaging dose
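
The sparse-constraint step in such frameworks is typically a greedy sparse coding of each image patch against the learned dictionary. A minimal sketch of one such coder (plain matching pursuit over unit-norm atoms; this is illustrative and not the authors' exact solver):

```python
def matching_pursuit(signal, dictionary, n_nonzero):
    # Greedy sparse coding: repeatedly pick the unit-norm atom most
    # correlated with the current residual and subtract its projection.
    residual = list(signal)
    coeffs = {}
    for _ in range(n_nonzero):
        best_k, best_c = None, 0.0
        for k, atom in enumerate(dictionary):
            c = sum(a * r for a, r in zip(atom, residual))
            if abs(c) > abs(best_c):
                best_k, best_c = k, c
        if best_k is None:          # residual orthogonal to all atoms
            break
        coeffs[best_k] = coeffs.get(best_k, 0.0) + best_c
        residual = [r - best_c * a for r, a in zip(residual, dictionary[best_k])]
    return coeffs, residual

# 3-sample "patch" coded against a trivial 3-atom dictionary
atoms = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
coeffs, resid = matching_pursuit([2.0, -1.0, 0.0], atoms, 2)
```

In the reconstruction loop described above, this coding is applied patch-by-patch to the current high- and low-energy images, and the sparse approximations are folded back into the basis-component update.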

  12. SU-E-I-41: Dictionary Learning Based Quantitative Reconstruction for Low-Dose Dual-Energy CT (DECT)

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Q [School of Electronic and Information Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi 710049 (China); Department of Radiation Oncology, Stanford University, Stanford, CA 94305 (United States); Xing, L [Department of Radiation Oncology, Stanford University, Stanford, CA 94305 (United States); Xiong, G; Elmore, K; Min, J [Dalio Institute of Cardiovascular Imaging, New York-Presbyterian Hospital and Weill Cornell Medical College, New York, NY (United States)

    2015-06-15

    Purpose: DECT collects two sets of projection data under higher and lower energies. With appropriate composition methods applied to the linear attenuation coefficients, quantitative information about the object, such as density, can be obtained. In practice, one of the important problems in DECT is the radiation dose due to the doubled scans. This work is aimed at establishing a dictionary learning based reconstruction framework for DECT that improves image quality while reducing the imaging dose. Methods: In our method, two dictionaries were learned in advance from high-energy and low-energy image datasets of similar objects acquired under normal dose. The linear attenuation coefficient was decomposed into two basis components with a material-based composition method. An iterative reconstruction framework was employed, in which the two basis components were alternately updated using the DECT datasets and dictionary learning based sparse constraints. After one updating step under the dataset fidelity constraints, both high-energy and low-energy images can be obtained from the two basis components. Sparse constraints based on the learned dictionaries were applied to the high- and low-energy images to update the two basis components. The iterative calculation continues until a pre-set number of iterations is reached. Results: We evaluated the proposed dictionary learning method with dual-energy images collected using a DECT scanner. We re-projected the projection data with added Poisson noise to reflect the low-dose situation. The results obtained by the proposed method were compared with those obtained using an FBP based method and a TV based method. It was found that the proposed approach yields better results than the other methods, with higher resolution and less noise. Conclusion: The use of dictionaries learned from DECT images under normal dose is valuable and leads to improved results at much lower imaging dose.

  13. Playing tag with ANN: boosted top identification with pattern recognition

    International Nuclear Information System (INIS)

    Almeida, Leandro G.; Backović, Mihailo; Cliche, Mathieu; Lee, Seung J.; Perelstein, Maxim

    2015-01-01

    Many searches for physics beyond the Standard Model at the Large Hadron Collider (LHC) rely on top tagging algorithms, which discriminate between boosted hadronic top quarks and the much more common jets initiated by light quarks and gluons. We note that the hadronic calorimeter (HCAL) effectively takes a “digital image” of each jet, with pixel intensities given by energy deposits in individual HCAL cells. Viewed in this way, top tagging becomes a canonical pattern recognition problem. With this motivation, we present a novel top tagging algorithm based on an Artificial Neural Network (ANN), one of the most popular approaches to pattern recognition. The ANN is trained on a large sample of boosted tops and light quark/gluon jets, and is then applied to independent test samples. The ANN tagger demonstrated excellent performance in a Monte Carlo study: for example, for jets with p_T in the 1100–1200 GeV range, 60% top-tag efficiency can be achieved with a 4% mis-tag rate. We discuss the physical features of the jets identified by the ANN tagger as the most important for classification, as well as correlations between the ANN tagger and some of the familiar top-tagging observables and algorithms.
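
The "digital image" view of the HCAL described above can be made concrete: the sketch below bins calorimeter cells around the jet axis into a small pixel grid of normalised energy deposits, the kind of input vector such an ANN tagger would consume. The grid size and window are illustrative, not the paper's settings.

```python
def jet_image(cells, eta_c, phi_c, half_width=1.0, n_pix=5):
    # Bin calorimeter cells (eta, phi, energy) around the jet axis
    # (eta_c, phi_c) into an n_pix x n_pix grid of energy deposits,
    # then normalise to unit total energy.
    img = [[0.0] * n_pix for _ in range(n_pix)]
    scale = n_pix / (2.0 * half_width)
    total = 0.0
    for eta, phi, e in cells:
        i = int((eta - eta_c + half_width) * scale)
        j = int((phi - phi_c + half_width) * scale)
        if 0 <= i < n_pix and 0 <= j < n_pix:
            img[i][j] += e
            total += e
    if total > 0:
        img = [[v / total for v in row] for row in img]
    return img

# toy two-prong deposit around a jet axis at (eta, phi) = (0, 0)
cells = [(0.0, 0.0, 50.0), (0.4, 0.4, 50.0)]
img = jet_image(cells, 0.0, 0.0)
```

Flattening `img` row by row yields the pixel-intensity vector; in the paper's setup such vectors for boosted-top and light-jet samples form the training set for the ANN classifier.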

  14. Playing tag with ANN: boosted top identification with pattern recognition

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Leandro G. [Institut de Biologie de l’École Normale Supérieure (IBENS), Inserm 1024- CNRS 8197,46 rue d’Ulm, 75005 Paris (France); Backović, Mihailo [Center for Cosmology, Particle Physics and Phenomenology - CP3,Universite Catholique de Louvain,Louvain-la-neuve (Belgium); Cliche, Mathieu [Laboratory for Elementary Particle Physics, Cornell University,Ithaca, NY 14853 (United States); Lee, Seung J. [Department of Physics, Korea Advanced Institute of Science and Technology,335 Gwahak-ro, Yuseong-gu, Daejeon 305-701 (Korea, Republic of); School of Physics, Korea Institute for Advanced Study,Seoul 130-722 (Korea, Republic of); Perelstein, Maxim [Laboratory for Elementary Particle Physics, Cornell University,Ithaca, NY 14853 (United States)

    2015-07-17

    Many searches for physics beyond the Standard Model at the Large Hadron Collider (LHC) rely on top tagging algorithms, which discriminate between boosted hadronic top quarks and the much more common jets initiated by light quarks and gluons. We note that the hadronic calorimeter (HCAL) effectively takes a “digital image” of each jet, with pixel intensities given by energy deposits in individual HCAL cells. Viewed in this way, top tagging becomes a canonical pattern recognition problem. With this motivation, we present a novel top tagging algorithm based on an Artificial Neural Network (ANN), one of the most popular approaches to pattern recognition. The ANN is trained on a large sample of boosted tops and light quark/gluon jets, and is then applied to independent test samples. The ANN tagger demonstrated excellent performance in a Monte Carlo study: for example, for jets with p_T in the 1100–1200 GeV range, 60% top-tag efficiency can be achieved with a 4% mis-tag rate. We discuss the physical features of the jets identified by the ANN tagger as the most important for classification, as well as correlations between the ANN tagger and some of the familiar top-tagging observables and algorithms.

  15. A Fault Diagnosis Model Based on LCD-SVD-ANN-MIV and VPMCD for Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Songrong Luo

    2016-01-01

    Full Text Available The fault diagnosis process is essentially a class discrimination problem. However, traditional class discrimination methods such as SVM and ANN fail to capitalize on the interactions among the feature variables. Variable predictive model-based class discrimination (VPMCD) can adequately use these interactions, but the feature extraction and selection greatly affect the accuracy and stability of the VPMCD classifier. Aiming at the nonstationary characteristics of vibration signals from rotating machinery with local faults, a singular value decomposition (SVD) technique based on local characteristic-scale decomposition (LCD) was developed to extract the feature variables. Subsequently, combining an artificial neural network (ANN) and mean impact value (MIV), ANN-MIV was proposed as a feature selection approach to select more suitable feature variables as the input vector of the VPMCD classifier. Finally, a novel fault diagnosis model based on LCD-SVD-ANN-MIV and VPMCD is proposed and validated by an experimental application to roller bearing fault diagnosis. The results show that the proposed method is effective and noise tolerant, and the comparative results demonstrate that it is superior to the other methods in diagnosis speed, diagnosis success rate, and diagnosis stability.
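
The mean impact value (MIV) feature-selection idea mentioned above is simple to sketch: perturb one input of the trained network at a time and average the resulting output change. The perturbation size and the toy "network" below are illustrative, not the paper's configuration.

```python
def mean_impact_value(predict, samples, n_features, delta=0.1):
    # MIV feature ranking: perturb one input at a time up and down by
    # delta (here +/-10%) and average the change in the trained model's
    # output over the samples; larger |MIV| means more impact.
    mivs = []
    for j in range(n_features):
        diffs = []
        for x in samples:
            up = list(x); up[j] *= (1.0 + delta)
            dn = list(x); dn[j] *= (1.0 - delta)
            diffs.append(predict(up) - predict(dn))
        mivs.append(sum(diffs) / len(diffs))
    return mivs

# toy "trained network": output depends only on the first feature
def toy_net(x):
    return 2.0 * x[0]

mivs = mean_impact_value(toy_net, [[1.0, 5.0], [2.0, 3.0]], 2)
```

Features with near-zero |MIV| (like the second one here) would be dropped, and the survivors form the input vector passed to the VPMCD classifier.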

  16. Artificial neural network approach to predict surgical site infection after free-flap reconstruction in patients receiving surgery for head and neck cancer.

    Science.gov (United States)

    Kuo, Pao-Jen; Wu, Shao-Chun; Chien, Peng-Chen; Chang, Shu-Shya; Rau, Cheng-Shyuan; Tai, Hsueh-Ling; Peng, Shu-Hui; Lin, Yi-Chun; Chen, Yi-Chun; Hsieh, Hsiao-Yun; Hsieh, Ching-Hua

    2018-03-02

    The aim of this study was to develop an effective surgical site infection (SSI) prediction model for patients receiving free-flap reconstruction after surgery for head and neck cancer using an artificial neural network (ANN), and to compare its predictive power with that of conventional logistic regression (LR). There were 1,836 patients with 1,854 free-flap reconstructions and 438 postoperative SSIs in the dataset for analysis. They were randomly assigned in a ratio of 7:3 to a training set and a test set. Based on comprehensive characteristics of patients and diseases in the absence or presence of operative data, prediction of SSI was performed at two time points (pre-operatively and post-operatively) with a feed-forward ANN and the LR models. In addition to the calculated accuracy, sensitivity, and specificity, the predictive performance of ANN and LR was assessed based on area under the curve (AUC) measures of receiver operating characteristic curves and the Brier score. ANN had a significantly higher AUC for post-operative prediction (0.892) and pre-operative prediction (0.808) than LR (both P < 0.0001). In addition, the AUC of the post-operative prediction by ANN was significantly higher than that of the pre-operative prediction (P < 0.0001). With the highest AUC and the lowest Brier score (0.090), the post-operative prediction by ANN had the highest overall performance in predicting SSI after free-flap reconstruction in patients receiving surgery for head and neck cancer.
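
The two comparison metrics used in the study, AUC and the Brier score, are straightforward to compute directly. A minimal sketch (not the authors' code; the labels and scores below are invented):

```python
def auc(labels, scores):
    # Mann-Whitney U formulation of AUC: the probability that a random
    # positive case is scored above a random negative one (ties count 0.5)
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def brier(labels, probs):
    # mean squared difference between predicted probability and outcome;
    # lower is better, 0 is a perfect probabilistic forecast
    return sum((p - l) ** 2 for l, p in zip(labels, probs)) / len(labels)

labels = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.7, 0.2]
```

For these toy predictions the AUC is 0.75 and the Brier score 0.175; the study's comparison is exactly this computation applied to the ANN and LR outputs on the held-out test set.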

  17. A study of using smartphone to detect and identify construction workers' near-miss falls based on ANN

    Science.gov (United States)

    Zhang, Mingyuan; Cao, Tianzhuo; Zhao, Xuefeng

    2018-03-01

    As an effective fall-accident prevention method, insight into near-miss falls provides an efficient way to find the causes of fall accidents, classify the types of near-miss falls and control the potential hazards. In this context, the paper proposes a method to detect and identify near-miss falls that occur when a worker walks in a workplace, based on an artificial neural network (ANN). The energy variation generated by workers who experience near-miss falls is measured by sensors embedded in a smartphone. Two experiments were designed to train the algorithm to identify various types of near-miss falls and to test the recognition accuracy, respectively. Finally, a test was conducted by workers wearing smartphones as they walked around a simulated construction workplace. The motion data were collected, processed and input to the trained ANN to detect and identify near-miss falls. Thresholds were obtained to measure the relationship between near-miss falls and fall accidents in a quantitative way. This approach, which integrates the smartphone and the ANN, will help detect near-miss fall events and identify hazardous elements and vulnerable workers, providing opportunities to eliminate dangerous conditions on a construction site or to alert possible victims who need to change their behavior before the occurrence of a fall accident.
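
A crude stand-in for the smartphone-based detector can be written with thresholds alone: a near-miss typically shows up in the accelerometer as a brief dip well below 1 g (partial free-fall) followed by a spike (recovery impact). The thresholds below are illustrative, and the paper's ANN replaces exactly this kind of hand-set rule.

```python
import math

def detect_near_miss(samples, g=9.81, low=0.6, high=2.0):
    # samples: (ax, ay, az) accelerometer readings in m/s^2.
    # Flag a near-miss when the total acceleration magnitude first dips
    # below `low` g and later spikes above `high` g.
    mags = [math.sqrt(x * x + y * y + z * z) / g for x, y, z in samples]
    dipped = False
    for m in mags:
        if m < low:
            dipped = True
        elif dipped and m > high:
            return True
    return False

walk = [(0.0, 0.0, 9.8), (0.1, 0.0, 9.7), (0.0, 0.1, 9.9)]
slip = [(0.0, 0.0, 9.8), (0.0, 0.0, 4.0), (0.0, 0.0, 25.0), (0.0, 0.0, 9.8)]
```

Normal walking stays near 1 g and is not flagged, while the dip-then-spike pattern is; the ANN approach learns such signatures (and distinguishes near-miss types) from labeled motion data instead of fixed thresholds.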

  18. Tensor-based Dictionary Learning for Spectral CT Reconstruction

    Science.gov (United States)

    Zhang, Yanbo; Wang, Ge

    2016-01-01

    Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and the channel images are highly correlated with one another. Exploiting these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained in which each atom is a rank-one tensor. The trained dictionary is then used to sparsely represent image tensor patches during an iterative reconstruction process, and the alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than the currently popular methods. PMID:27541628

  19. Tensor-Based Dictionary Learning for Spectral CT Reconstruction.

    Science.gov (United States)

    Zhang, Yanbo; Mou, Xuanqin; Wang, Ge; Yu, Hengyong

    2017-01-01

    Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and the channel images are highly correlated with one another. Exploiting these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained in which each atom is a rank-one tensor. The trained dictionary is then used to sparsely represent image tensor patches during an iterative reconstruction process, and the alternating minimization scheme is adapted for optimization. The effectiveness of our proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than the currently popular methods.

  20. Dendroclimatic transfer functions revisited: Little Ice Age and Medieval Warm Period summer temperatures reconstructed using artificial neural networks and linear algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Helama, S.; Holopainen, J.; Eronen, M. [Department of Geology, University of Helsinki, (Finland); Makarenko, N.G. [Russian Academy of Sciences, St. Petersburg (Russian Federation). Pulkovo Astronomical Observatory; Karimova, L.M.; Kruglun, O.A. [Institute of Mathematics, Almaty (Kazakhstan); Timonen, M. [Finnish Forest Research Institute, Rovaniemi Research Unit (Finland); Merilaeinen, J. [SAIMA Unit of the Savonlinna Department of Teacher Education, University of Joensuu (Finland)

    2009-07-01

    Tree-rings tell of past climates. To do so, tree-ring chronologies comprising numerous climate-sensitive living-tree and subfossil time-series need to be 'transferred' into palaeoclimate estimates using transfer functions. The purpose of this study is to compare different types of transfer functions, especially linear and nonlinear algorithms. Accordingly, multiple linear regression (MLR), linear scaling (LSC) and artificial neural networks (ANN, a nonlinear algorithm) were compared. Transfer functions were built using a regional tree-ring chronology and instrumental temperature observations from Lapland (northern Finland and Sweden). In addition, conventional MLR was compared with a hybrid model whereby climate was reconstructed separately for short- and long-period timescales prior to combining the bands of timescales into a single hybrid model. The fidelity of the different reconstructions was validated against instrumental climate data. The reconstructions by MLR and ANN showed reliable reconstruction capabilities over the instrumental period (AD 1802-1998). LSC failed to reach reasonable verification statistics and did not qualify as a reliable reconstruction, due mainly to exaggeration of the low-frequency climatic variance. Over this instrumental period, the reconstructed low-frequency amplitudes of climate variability were rather similar for MLR and ANN. Notably greater differences between the models were found over the actual reconstruction period (AD 802-1801). A marked temperature decline from the Medieval Warm Period (AD 931-1180) to the Little Ice Age (AD 1601-1850) was evident in all the models; this decline was approx. 0.5 °C as reconstructed by MLR. The different ANN-based palaeotemperatures showed simultaneous cooling of 0.2 to 0.5 °C, depending on the algorithm. The hybrid MLR did not seem to provide further benefit over conventional MLR in our sample. The robustness of the conventional MLR over the calibration

  1. Prediction of net energy consumption based on economic indicators (GNP and GDP) in Turkey

    International Nuclear Information System (INIS)

    Soezen, Adnan; Arcaklioglu, Erol

    2007-01-01

    The most important aim of this study is to obtain equations based on economic indicators (gross national product, GNP, and gross domestic product, GDP) and population increase to predict the net energy consumption of Turkey using artificial neural networks (ANNs), in order to determine the future level of energy consumption and make correct investments in Turkey. In this study, three different models were used to train the ANN. In the first (Model 1), energy indicators such as installed capacity, generation, energy imports and energy exports were used; in the second (Model 2), GNP; and in the third (Model 3), GDP was used as the input layer of the network. The net energy consumption (NEC) is in the output layer for all models. To train the neural network, economic and energy data for the last 37 years (1968-2005) were used for all models. The aim of using different models is to demonstrate the effect of economic indicators on the estimation of NEC. The maximum mean absolute percentage error (MAPE) was found to be 2.322732, 1.110525 and 1.122048 for Models 1, 2 and 3, respectively. R^2 values were obtained as 0.999444, 0.999903 and 0.999903 for the training data of Models 1, 2 and 3, respectively. The ANN approach shows greater accuracy for evaluating NEC based on economic indicators. Based on the outputs of the study, the ANN model can be used to estimate the NEC from the country's population and economic indicators with high confidence for planning future projections
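
MAPE, the error measure quoted above, is straightforward to compute. A minimal sketch with invented values, not the study's data:

```python
def mape(actual, predicted):
    # mean absolute percentage error, in percent
    return 100.0 / len(actual) * sum(
        abs((a - p) / a) for a, p in zip(actual, predicted))

# two hypothetical consumption values vs. model predictions
err = mape([100.0, 200.0], [90.0, 210.0])
```

Here the errors are 10% and 5%, so MAPE is 7.5%; the study's figures of 1.1-2.3% correspond to this quantity evaluated over the 37-year series for each model.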

  2. iAnn

    DEFF Research Database (Denmark)

    Jimenez, Rafael C; Albar, Juan P; Bhak, Jong

    2013-01-01

    We present iAnn, an open source community-driven platform for dissemination of life science events, such as courses, conferences and workshops. iAnn allows automatic visualisation and integration of customised event reports. A central repository lies at the core of the platform: curators add... submitted events, and these are subsequently accessed via web services. Thus, once an iAnn widget is incorporated into a website, it permanently shows timely relevant information as if it were native to the remote site. At the same time, announcements submitted to the repository are automatically

  3. A simulated-based neural network algorithm for forecasting electrical energy consumption in Iran

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Sohrabkhani, S.

    2008-01-01

    This study presents an integrated algorithm for forecasting monthly electrical energy consumption based on an artificial neural network (ANN), computer simulation and design of experiments using stochastic procedures. First, an ANN approach based on a supervised multi-layer perceptron (MLP) network is illustrated for electrical consumption forecasting. The chosen model can then be compared with the one estimated by a time series model. Computer simulation is developed to generate random variables for monthly electricity consumption, in order to capture the effects of the probabilistic distribution on monthly electricity consumption. The simulated-based ANN model is then developed. There are therefore four treatments to be considered in the analysis of variance (ANOVA): actual data, time series, ANN and simulated-based ANN. Furthermore, ANOVA is used to test the null hypothesis that the above four alternatives are statistically equal. If the null hypothesis is accepted, the lowest mean absolute percentage error (MAPE) value is used to select the best model; otherwise, Duncan's multiple range test (DMRT) of paired comparison is used to select the optimum model, which could be time series, ANN or simulated-based ANN. In case of ties, the lowest MAPE value is taken as the benchmark. The integrated algorithm has several unique features. First, it is flexible and identifies the best model based on the results of ANOVA and MAPE, whereas previous studies selected the best-fitted ANN model based on MAPE or relative-error results alone. Second, the proposed algorithm may identify a conventional time series model as the best model for future electricity consumption forecasting because of its dynamic structure, whereas previous studies assumed that the ANN always provides the best solutions and estimates. To show the applicability and superiority of the proposed algorithm, the monthly electricity consumption in Iran from March 1994 to February 2005 (131 months) is used and applied to
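    The selection logic described above (ANOVA first, lowest MAPE as the fallback) can be sketched as follows; the forecast data are synthetic stand-ins, and Duncan's test is replaced by a plain lowest-MAPE choice for brevity:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
actual = rng.uniform(900, 1100, size=24)   # hypothetical monthly consumption (GWh)

# Hypothetical forecasts from the three candidate models
forecasts = {
    "time series": actual * rng.normal(1.00, 0.030, 24),
    "ANN": actual * rng.normal(1.00, 0.020, 24),
    "simulated-based ANN": actual * rng.normal(1.00, 0.025, 24),
}

def mape(y, yhat):
    """Mean absolute percentage error, in percent."""
    return float(np.mean(np.abs((y - yhat) / y)) * 100)

# Step 1: one-way ANOVA on the absolute percentage errors of the models
ape = [np.abs((actual - f) / actual) * 100 for f in forecasts.values()]
_, p_value = f_oneway(*ape)

# Step 2: if the models are statistically indistinguishable (p > 0.05),
# pick the lowest MAPE; a full implementation would run Duncan's test
# for the significant case instead.
best = min(forecasts, key=lambda name: mape(actual, forecasts[name]))
print(p_value, best, mape(actual, forecasts[best]))
```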

  4. LFC based adaptive PID controller using ANN and ANFIS techniques

    Directory of Open Access Journals (Sweden)

    Mohamed I. Mosaad

    2014-12-01

    This paper presents an adaptive PID Load Frequency Control (LFC) for power systems using Adaptive Neuro-Fuzzy Inference Systems (ANFIS) and Artificial Neural Networks (ANN) guided by a Genetic Algorithm (GA). PID controller parameters are tuned off-line using the GA to minimize the integral square error over a wide range of load variations. The values of the PID controller parameters obtained from the GA are used to train both the ANFIS and the ANN. The two proposed techniques can therefore tune the PID controller parameters online for optimal response at any other load point within the operating range. Testing of the developed techniques shows that the adaptive PID-LFC preserves optimal performance over the whole loading range. The results signify the superiority of ANFIS over ANN in terms of performance measures.

  5. Anne-Ly Reimaa: "Openness is important in communication" / Anne-Ly Reimaa; interviewed by Tiia Linnard

    Index Scriptorium Estoniae

    Reimaa, Anne-Ly

    2005-01-01

    Also published in: Severnoje Poberezhje: Subbota, 3 September, p. 5. The interviewee discusses her work in Brussels, where she represents the Association of Estonian Cities and the Association of Estonian Rural Municipalities. Anne Jundas and Kaia Kaldvee offer their opinions. Appendix: CV

  6. THE FEMINISM AND FEMININITY OF ANN VERONICA IN H. G. WELLS' ANN VERONICA

    Directory of Open Access Journals (Sweden)

    Liem Satya Limanta

    2002-01-01

    H. G. Wells' Ann Veronica structurally seems to be divided into two parts: the first deals with Ann Veronica's struggle for equality with men and freedom in most aspects of life, such as politics, economics, education, and sexuality; the second describes the other side of her individuality, which she cannot deny, namely her femininity, such as her craving for love, marriage, maternity, and beauty. H. G. Wells vividly describes these two elements in Ann Veronica: feminism and femininity. As a feminist, Ann Veronica rebelled against her authoritarian Victorian father, who regarded women only as men's property to be protected from the harsh world outside. On the other side, Ann could not deny being a woman after she fell in love with Capes. Her femininity is then explored from the second half of the novel. Although the novel ends with a depiction of Ann Veronica's domestic life, this does not mean that the feminism is gone altogether. The key point is that the family life Ann chooses as a 'submissive' wife and good mother is her own choice; it would be very different had it been forced upon her. Thus, this novel depicts both sides of Ann Veronica: her feminism and her femininity.

  7. Comparison Between Wind Power Prediction Models Based on Wavelet Decomposition with Least-Squares Support Vector Machine (LS-SVM and Artificial Neural Network (ANN

    Directory of Open Access Journals (Sweden)

    Maria Grazia De Giorgi

    2014-08-01

    A high penetration of wind energy into the electricity market requires the parallel development of efficient wind power forecasting models. Different hybrid forecasting methods were applied to wind power prediction, using historical data and numerical weather predictions (NWP). A comparative study was carried out for the prediction of the power production of a wind farm located in complex terrain. The performance of the Least-Squares Support Vector Machine (LS-SVM) with Wavelet Decomposition (WD) was evaluated at different time horizons and compared to hybrid Artificial Neural Network (ANN)-based methods. Hybrid methods based on LS-SVM with WD mostly outperform the other methods. A decomposition of the commonly used root mean square error was beneficial for a better understanding of the origin of the differences between prediction and measurement, and for comparing the accuracy of the different models. A sensitivity analysis was also carried out to underline the impact of each input on the ANN training process. In the case of ANN with the WD technique, the sensitivity analysis was repeated on each component obtained by the decomposition.
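    A minimal sketch of the two ingredients, a one-level Haar wavelet split and an LS-SVM regressor solved via its KKT linear system, is shown below on a synthetic power series (the data, kernel, and all parameter values are illustrative assumptions):

```python
import numpy as np

def ls_svm_fit(X, y, gamma=100.0, sigma=0.1):
    """LS-SVM regression: solve the KKT linear system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] with an RBF kernel."""
    n = len(y)
    K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def ls_svm_predict(X_train, b, alpha, X_new, sigma=0.1):
    K = np.exp(-(X_new[:, None] - X_train[None, :]) ** 2 / (2 * sigma ** 2))
    return K @ alpha + b

# Hypothetical hourly wind power series (arbitrary units)
t = np.arange(64, dtype=float)
power = np.sin(2 * np.pi * t / 24) + 0.1 * np.sin(2 * np.pi * t / 3)

# One-level Haar decomposition into approximation and detail components
approx = (power[0::2] + power[1::2]) / np.sqrt(2)
detail = (power[0::2] - power[1::2]) / np.sqrt(2)

# Fit an LS-SVM to each component separately (the hybrid-method idea)
X = np.arange(len(approx), dtype=float) / len(approx)
fits = []
for comp in (approx, detail):
    b, alpha = ls_svm_fit(X, comp)
    fits.append(ls_svm_predict(X, b, alpha, X))
rmse_approx = float(np.sqrt(np.mean((fits[0] - approx) ** 2)))
print(rmse_approx)    # in-sample fit error of the approximation component
```

In a real forecaster each component would be predicted ahead in time and the Haar transform inverted to recombine the predictions.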

  8. Tunka-Rex: energy reconstruction with a single antenna station

    Science.gov (United States)

    Hiller, R.; Bezyazeekov, P. A.; Budnev, N. M.; Fedorov, O.; Gress, O. A.; Haungs, A.; Huege, T.; Kazarina, Y.; Kleifges, M.; Korosteleva, E. E.; Kostunin, D.; Krömer, O.; Kungel, V.; Kuzmichev, L. A.; Lubsandorzhiev, N.; Mirgazov, R. R.; Monkhoev, R.; Osipova, E. A.; Pakhorukov, A.; Pankov, L.; Prosin, V. V.; Rubtsov, G. I.; Schröder, F. G.; Wischnewski, R.; Zagorodnikov, A.

    2017-03-01

    The Tunka-Radio extension (Tunka-Rex) is a radio detector for air showers in Siberia. From 2012 to 2014, Tunka-Rex operated exclusively together with its host experiment, the air-Cherenkov array Tunka-133, which provided trigger, data acquisition, and an independent air-shower reconstruction. It was shown that the air-shower energy can be reconstructed by Tunka-Rex with a precision of 15% for events with signal in at least 3 antennas, using the radio amplitude at a distance of 120 m from the shower axis as an energy estimator. Using the reconstruction from the host experiment Tunka-133 for the air-shower geometry (shower core and direction), the energy estimator can in principle already be obtained with measurements from a single antenna, close to the reference distance. We present a method for event selection and energy reconstruction, requiring only one antenna, and achieving a precision of about 20%. This method increases the effective detector area and lowers thresholds for zenith angle and energy, resulting in three times more events than in the standard reconstruction.
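    The idea of using the radio amplitude near the 120 m reference distance as an energy estimator can be illustrated with a toy power-law calibration; the functional form, normalization, and 15% scatter below are assumptions for illustration, not Tunka-Rex values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical calibration set: true shower energies and radio amplitudes
# at the 120 m reference distance, with ~15% lognormal scatter.
E_true = rng.uniform(0.1, 1.0, 200)                       # EeV (illustrative)
amp120 = 50.0 * E_true * rng.lognormal(0.0, 0.15, 200)    # uV/m (illustrative)

# Calibrate a power law E = kappa * amp^b by a straight-line fit in log-log space
b, log_kappa = np.polyfit(np.log(amp120), np.log(E_true), 1)
kappa = np.exp(log_kappa)

# Single-antenna energy reconstruction and its relative spread
E_rec = kappa * amp120 ** b
resolution = float(np.std((E_rec - E_true) / E_true))
print(b, resolution)    # slope near 1, spread comparable to the input scatter
```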

  9. Dynamically stable associative learning: a neurobiologically based ANN and its applications

    Science.gov (United States)

    Vogl, Thomas P.; Blackwell, Kim L.; Barbour, Garth; Alkon, Daniel L.

    1992-07-01

    Most currently popular artificial neural networks (ANN) are based on conceptions of neuronal properties that date back to the 1940s and 50s, i.e., to the ideas of McCulloch, Pitts, and Hebb. Dystal is an ANN based on current knowledge of neurobiology at the cellular and subcellular level. Networks based on these neurobiological insights exhibit the following advantageous properties: (1) A theoretical storage capacity of bN non-orthogonal memories, where N is the number of output neurons sharing common inputs and b is the number of distinguishable (gray shade) levels. (2) The ability to learn, store, and recall associations among noisy, arbitrary patterns. (3) A local synaptic learning rule (learning depends neither on the output of the post-synaptic neuron nor on a global error term), some of whose consequences are: (4) Feed-forward, lateral, and feed-back connections (as well as time-sensitive connections) are possible without alteration of the learning algorithm; (5) Storage allocation (patch creation) proceeds dynamically as associations are learned (self-organizing); (6) The number of training set presentations required for learning is small. Performance on patterns with different expressions and/or corrupted by noise, and on reading hand-written digits (98% accuracy) and hand-printed Japanese Kanji (90% accuracy), is demonstrated.

  10. Evaluation of Effectiveness of Wavelet Based Denoising Schemes Using ANN and SVM for Bearing Condition Classification

    Directory of Open Access Journals (Sweden)

    Vijay G. S.

    2012-01-01

    Wavelet-based denoising has proven its ability to denoise bearing vibration signals by improving the signal-to-noise ratio (SNR) and reducing the root-mean-square error (RMSE). In this paper, seven wavelet-based denoising schemes are evaluated based on the performance of an Artificial Neural Network (ANN) and a Support Vector Machine (SVM) for bearing condition classification. The work consists of two parts. In the first part, a synthetic signal simulating a defective bearing vibration signal with Gaussian noise was subjected to these denoising schemes, and the best scheme based on the SNR and the RMSE was identified. In the second part, the vibration signals collected from a customized Rolling Element Bearing (REB) test rig for four bearing conditions were subjected to these denoising schemes. Several time and frequency domain features were extracted from the denoised signals, out of which a few sensitive features were selected using Fisher's Criterion (FC). The extracted features were used to train and test the ANN and the SVM. The best denoising scheme identified, based on the classification performances of the ANN and the SVM, was found to be the same as the one obtained using the synthetic signal.
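    A minimal example of one such scheme, a one-level Haar transform with soft universal thresholding, evaluated by SNR and RMSE on a synthetic noisy signal (the signal model and threshold choice are illustrative assumptions, not one of the paper's seven schemes):

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def haar_idwt(a, d):
    """Inverse of haar_dwt."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def soft(x, t):
    """Soft thresholding: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

rng = np.random.default_rng(2)
n = 1024
t = np.arange(n) / n
# Toy stand-in for a bearing vibration signal, plus Gaussian noise
clean = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
noisy = clean + 0.4 * rng.standard_normal(n)

# Decompose, apply the universal threshold to the detail band, reconstruct
a, d = haar_dwt(noisy)
sigma = np.median(np.abs(d)) / 0.6745           # robust noise estimate
thr = sigma * np.sqrt(2 * np.log(n))
denoised = haar_idwt(a, soft(d, thr))

def rmse(x, y):
    return float(np.sqrt(np.mean((x - y) ** 2)))

def snr_db(ref, x):
    return float(10 * np.log10(np.sum(ref ** 2) / np.sum((ref - x) ** 2)))

print(rmse(clean, noisy), rmse(clean, denoised))
print(snr_db(clean, noisy), snr_db(clean, denoised))
```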

  11. The parallel implementation of a backpropagation neural network and its applicability to SPECT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kerr, John Patrick [Iowa State Univ., Ames, IA (United States)

    1992-01-01

    The objective of this study was to determine the feasibility of using an Artificial Neural Network (ANN), in particular a backpropagation ANN, to improve the speed and quality of the reconstruction of three-dimensional SPECT (single photon emission computed tomography) images. In addition, since the processing elements (PEs) in each layer of an ANN are independent of each other, the speed and efficiency of the neural network architecture could be better optimized by implementing the ANN on a massively parallel computer. The specific goals of this research were: to implement a fully interconnected backpropagation neural network on a serial computer and a SIMD parallel computer, to identify any reduction in the time required to train these networks on the parallel machine versus the serial machine, to determine if these neural networks can learn to recognize SPECT data by training them on a section of an actual SPECT image, and to determine from the knowledge obtained in this research whether full SPECT image reconstruction by an ANN implemented on a parallel computer is feasible, both in the time required to train the network and in the quality of the reconstructed images.

  12. Anne Frank relaunched in the world of comics and graphic novels

    NARCIS (Netherlands)

    Ribbens, Kees

    2017-01-01

    Recently the Basel-based Anne Frank Fonds proudly presented the Graphic Diary of Anne Frank, creating the impression that this is the first-ever comic-book version of Anne Frank's narrative. This article shows that there were various predecessors.

  13. Estimating SPT-N Value Based on Soil Resistivity using Hybrid ANN-PSO Algorithm

    Science.gov (United States)

    Nur Asmawisham Alel, Mohd; Ruben Anak Upom, Mark; Asnida Abdullah, Rini; Hazreek Zainal Abidin, Mohd

    2018-04-01

    The Standard Penetration Resistance (N value) is used in many empirical geotechnical engineering formulas, while soil resistivity is a measure of a soil's resistance to electrical flow. For a particular site, usually only limited N-value data are available; in contrast, resistivity data can be obtained extensively. Moreover, previous studies showed evidence of a correlation between N value and resistivity, yet no existing method is able to interpret resistivity data to estimate N value. Thus, the aim is to develop a method for estimating N value using resistivity data. This study proposes a hybrid Artificial Neural Network-Particle Swarm Optimization (ANN-PSO) method to estimate N value using resistivity data. Five different ANN-PSO models based on five boreholes were developed and analyzed. The performance metrics used were the coefficient of determination (R²) and the mean absolute error (MAE). Analysis of the results found that this method can estimate N value (best R² = 0.85 and best MAE = 0.54) given that the constraint, Δ {\\bar{l}}ref, is satisfied. The results suggest that the ANN-PSO method can be used to estimate N value with good accuracy.
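    The hybrid idea, PSO searching the weight space of a small feed-forward network instead of gradient-based training, can be sketched as follows; the resistivity/N-value relationship is synthetic and every parameter value is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical resistivity (ohm.m) vs SPT N-value pairs with a noisy
# monotonic relationship (stand-in for borehole data), standardized
res = rng.uniform(10, 300, 60)
N = 2.0 + 0.15 * res + rng.normal(0, 3, 60)
x = (res - res.mean()) / res.std()
y = (N - N.mean()) / N.std()

H = 4             # hidden neurons
dim = 3 * H + 1   # input->hidden weights, hidden biases, hidden->out weights, out bias

def forward(w, x):
    w1, b1, w2, b2 = w[:H], w[H:2 * H], w[2 * H:3 * H], w[-1]
    h = np.tanh(np.outer(x, w1) + b1)
    return h @ w2 + b2

def mse(w):
    return float(np.mean((forward(w, x) - y) ** 2))

# Plain global-best PSO over the network weights
n_part, iters = 30, 200
pos = rng.normal(0, 1, (n_part, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([mse(p) for p in pos])
g = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_part, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved] = pos[improved]
    pbest_val[improved] = vals[improved]
    g = pbest[np.argmin(pbest_val)].copy()

pred = forward(g, x)
r2_score = 1 - np.sum((pred - y) ** 2) / np.sum((y - y.mean()) ** 2)
print(float(min(pbest_val)), float(r2_score))
```

The cited study uses PSO to pick hyperparameters as well; here only the weights are optimized, to keep the sketch short.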

  14. Nonparametric reconstruction of the dark energy equation of state

    Energy Technology Data Exchange (ETDEWEB)

    Heitmann, Katrin [Los Alamos National Laboratory; Holsclaw, Tracy [Los Alamos National Laboratory; Alam, Ujjaini [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Higdon, David [Los Alamos National Laboratory; Sanso, Bruno [UC SANTA CRUZ; Lee, Herbie [UC SANTA CRUZ

    2009-01-01

    The major aim of ongoing and upcoming cosmological surveys is to unravel the nature of dark energy. In the absence of a compelling theory to test, a natural approach is to first attempt to characterize the nature of dark energy in detail, the hope being that this will lead to clues about the underlying fundamental theory. A major target in this characterization is the determination of the dynamical properties of the dark energy equation of state w. The discovery of a time variation in w(z) could then lead to insights about the dynamical origin of dark energy. This approach requires a robust and bias-free method for reconstructing w(z) from data, which does not rely on restrictive expansion schemes or assumed functional forms for w(z). We present a new nonparametric reconstruction method for the dark energy equation of state based on Gaussian Process models. This method reliably captures nontrivial behavior of w(z) and provides controlled error bounds. We demonstrate the power of the method on different sets of simulated supernova data. The GP model approach is very easily extended to include diverse cosmological probes.
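    A toy version of such a Gaussian-process reconstruction, conditioning a GP with mean -1 on noisy simulated w(z) points, might look like this (the true form of w(z), the kernel, and the noise level are all illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical "measurements" of w(z): a slowly varying equation of state
# observed with Gaussian noise at a few redshifts
z_obs = np.sort(rng.uniform(0.0, 1.5, 25))
w_true = lambda z: -1.0 + 0.3 * z / (1.0 + z)
w_obs = w_true(z_obs) + rng.normal(0, 0.02, z_obs.size)

def rbf(a, b, ell=0.5, amp=0.3):
    """Squared-exponential covariance kernel."""
    return amp ** 2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# GP posterior mean and variance, conditioned on the noisy observations,
# around the prior mean function m(z) = -1 (the cosmological-constant value)
K = rbf(z_obs, z_obs) + 0.02 ** 2 * np.eye(z_obs.size)
z_grid = np.linspace(0.0, 1.5, 50)
Ks = rbf(z_grid, z_obs)
alpha = np.linalg.solve(K, w_obs - (-1.0))
w_mean = -1.0 + Ks @ alpha
w_var = np.diag(rbf(z_grid, z_grid) - Ks @ np.linalg.solve(K, Ks.T))

err = float(np.max(np.abs(w_mean - w_true(z_grid))))
print(err)    # reconstruction error, well below the 0.3 prior amplitude
```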

  15. Comparing SVM and ANN based Machine Learning Methods for Species Identification of Food Contaminating Beetles.

    Science.gov (United States)

    Bisgin, Halil; Bera, Tanmay; Ding, Hongjian; Semey, Howard G; Wu, Leihong; Liu, Zhichao; Barnes, Amy E; Langley, Darryl A; Pava-Ripoll, Monica; Vyas, Himansu J; Tong, Weida; Xu, Joshua

    2018-04-25

    Insect pests, such as pantry beetles, are often associated with food contamination and public health risks. Machine learning has the potential to provide a more accurate and efficient solution for detecting their presence in food products, which is currently done manually. In our previous research, we demonstrated the feasibility of Artificial Neural Network (ANN) based pattern recognition techniques for species identification in the context of food safety. In this study, we present a Support Vector Machine (SVM) model which improved the average accuracy up to 85%, whereas the ANN method yielded ~80% accuracy after extensive parameter optimization. Both methods showed excellent genus-level identification, but SVM showed slightly better accuracy for most species. Highly accurate species-level identification remains a challenge, especially in distinguishing between species from the same genus, which may require improvements in both imaging and machine learning techniques. In summary, our work illustrates a new SVM-based technique and provides a good comparison with the ANN model in our context. We believe such insights will pave a better way forward for the application of machine learning towards species identification and food safety.

  16. EEG-Based Computer Aided Diagnosis of Autism Spectrum Disorder Using Wavelet, Entropy, and ANN

    Directory of Open Access Journals (Sweden)

    Ridha Djemal

    2017-01-01

    Autism spectrum disorder (ASD) is a type of neurodevelopmental disorder with core impairments in social relationships, communication, imagination, or flexibility of thought, and a restricted repertoire of activity and interest. In this work, a new computer-aided diagnosis (CAD) of autism based on electroencephalography (EEG) signal analysis is investigated. The proposed method is based on the discrete wavelet transform (DWT), entropy (En), and an artificial neural network (ANN). The DWT is used to decompose EEG signals into approximation and detail coefficients to obtain the EEG subbands. The feature vector is constructed by computing Shannon entropy values from each EEG subband. The ANN classifies the corresponding EEG signal as normal or autistic based on the extracted features. The experimental results show the effectiveness of the proposed method for assisting autism diagnosis. A receiver operating characteristic (ROC) curve metric is used to quantify the performance of the proposed method. The proposed method obtained promising results when tested using a real dataset provided by King Abdulaziz Hospital, Jeddah, Saudi Arabia.
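    The feature-extraction step (multi-level DWT, then Shannon entropy per subband) can be sketched as follows; a Haar filter and a histogram-based entropy estimate are used here as illustrative choices, and the "EEG" trace is synthetic:

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: approximation and detail halves."""
    return (x[0::2] + x[1::2]) / np.sqrt(2), (x[0::2] - x[1::2]) / np.sqrt(2)

def shannon_entropy(c, bins=16):
    """Shannon entropy (bits) of a subband via a normalized histogram."""
    counts, _ = np.histogram(c, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(5)
n = 512
t = np.arange(n) / 256.0
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(n)  # toy "EEG"

# 4-level decomposition: entropy of each detail subband + final approximation
features, a = [], eeg
for _ in range(4):
    a, d = haar_step(a)
    features.append(shannon_entropy(d))
features.append(shannon_entropy(a))
print(features)   # 5-element feature vector to feed the ANN classifier
```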

  17. Hydrology and sediment budget of Los Laureles Canyon, Tijuana, MX: Modelling channel, gully, and rill erosion with 3D photo-reconstruction, CONCEPTS, and AnnAGNPS

    Science.gov (United States)

    Taniguchi, Kristine; Gudiño, Napoleon; Biggs, Trent; Castillo, Carlos; Langendoen, Eddy; Bingner, Ron; Taguas, Encarnación; Liden, Douglas; Yuan, Yongping

    2015-04-01

    Several watersheds cross the US-Mexico boundary, resulting in trans-boundary environmental problems. Erosion in Tijuana, Mexico, increases the rate of sediment deposition in the Tijuana Estuary in the United States, altering the structure and function of the ecosystem. The well-being of residents in Tijuana is compromised by damage to infrastructure and homes built adjacent to stream channels, gully formation in dirt roads, and deposition of trash. We aim to understand the dominant source of sediment contributing to the sediment budget of the watershed (channel, gully, or rill erosion), where the hotspots of erosion are located, and what the impact of future planned and unplanned land use changes and Best Management Practices (BMPs) will be on sediment and storm flow. We will be using a mix of field methods, including 3D photo-reconstruction of stream channels, with two models, CONCEPTS and AnnAGNPS to constrain estimates of the sediment budget and impacts of land use change. Our research provides an example of how 3D photo-reconstruction and Structure from Motion (SfM) can be used to model channel evolution.

  18. SU-E-T-206: Improving Radiotherapy Toxicity Based On Artificial Neural Network (ANN) for Head and Neck Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Daniel D; Wernicke, A Gabriella; Nori, Dattatreyudu; Chao, KSC; Parashar, Bhupesh; Chang, Jenghwa [Weill Cornell Medical College, NY, NY (United States)

    2014-06-01

    Purpose/Objective(s): The aim of this study is to build an estimator of toxicity using an artificial neural network (ANN) for head and neck (H&N) cancer patients. Materials/Methods: An ANN can combine variables into a predictive model during training and consider all possible correlations of variables. We constructed an ANN based on the data from 73 patients with advanced H&N cancer treated with external beam radiotherapy and/or chemotherapy at our institution. For the toxicity estimator we defined input data including age, sex, site, stage, pathology, chemotherapy status, external beam radiation therapy (EBRT) technique, length of treatment, dose of EBRT, post-operative status, length of follow-up, and the status of local recurrences and distant metastasis. These data were digitized based on their significance and fed to the ANN as input nodes. We used 20 hidden nodes (for the 13 input nodes) to account for the correlations of the input nodes. For training the ANN, we divided the data into three subsets: a training set, a validation set and a test set. Finally, we built the toxicity estimator from the ANN output. Results: We used 13 input variables, including the status of local recurrences and distant metastasis, and 20 hidden nodes for correlations. We used 59 patients for the training set, 7 patients for the validation set and 7 patients for the test set, and fed the inputs to the Matlab neural network fitting tool. We trained the network to within 15% error on the outcome. In the end we obtained a toxicity estimation with 74% accuracy. Conclusion: We proved in principle that an ANN can be a very useful tool for predicting the RT outcomes for high-risk H&N patients. Currently we are improving the results using cross validation.

  19. SU-E-T-206: Improving Radiotherapy Toxicity Based On Artificial Neural Network (ANN) for Head and Neck Cancer Patients

    International Nuclear Information System (INIS)

    Cho, Daniel D; Wernicke, A Gabriella; Nori, Dattatreyudu; Chao, KSC; Parashar, Bhupesh; Chang, Jenghwa

    2014-01-01

    Purpose/Objective(s): The aim of this study is to build an estimator of toxicity using an artificial neural network (ANN) for head and neck (H&N) cancer patients. Materials/Methods: An ANN can combine variables into a predictive model during training and consider all possible correlations of variables. We constructed an ANN based on the data from 73 patients with advanced H&N cancer treated with external beam radiotherapy and/or chemotherapy at our institution. For the toxicity estimator we defined input data including age, sex, site, stage, pathology, chemotherapy status, external beam radiation therapy (EBRT) technique, length of treatment, dose of EBRT, post-operative status, length of follow-up, and the status of local recurrences and distant metastasis. These data were digitized based on their significance and fed to the ANN as input nodes. We used 20 hidden nodes (for the 13 input nodes) to account for the correlations of the input nodes. For training the ANN, we divided the data into three subsets: a training set, a validation set and a test set. Finally, we built the toxicity estimator from the ANN output. Results: We used 13 input variables, including the status of local recurrences and distant metastasis, and 20 hidden nodes for correlations. We used 59 patients for the training set, 7 patients for the validation set and 7 patients for the test set, and fed the inputs to the Matlab neural network fitting tool. We trained the network to within 15% error on the outcome. In the end we obtained a toxicity estimation with 74% accuracy. Conclusion: We proved in principle that an ANN can be a very useful tool for predicting the RT outcomes for high-risk H&N patients. Currently we are improving the results using cross validation.

  20. Artificial neural network (ANN)-based prediction of depth filter loading capacity for filter sizing.

    Science.gov (United States)

    Agarwal, Harshit; Rathore, Anurag S; Hadpe, Sandeep Ramesh; Alva, Solomon J

    2016-11-01

    This article presents an application of artificial neural network (ANN) modelling towards prediction of depth filter loading capacity for clarification of a monoclonal antibody (mAb) product during commercial manufacturing. The effect of operating parameters on filter loading capacity was evaluated based on the analysis of the change in differential pressure (DP) as a function of time. The proposed ANN model uses inlet stream properties (feed turbidity, feed cell count, feed cell viability), flux, and time to predict the corresponding DP. The ANN contained a single output layer with ten neurons in the hidden layer and employed a sigmoidal activation function. This network was trained with 174 training points, 37 validation points, and 37 test points. Further, a pressure cut-off of 1.1 bar was used for sizing the filter area required under each operating condition. The modelling results showed excellent agreement between the predicted and experimental data, with a regression coefficient (R²) of 0.98. The developed ANN model was used to perform variable depth filter sizing for different clarification lots. A Monte-Carlo simulation was performed to estimate the cost savings from using different filter areas for different clarification lots rather than the same filter area; a 10% saving in cost of goods was obtained for this operation. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:1436-1443, 2016.

  1. Graph-cut based discrete-valued image reconstruction.

    Science.gov (United States)

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  2. ANN-based calibration model of FTIR used in transformer online monitoring

    Science.gov (United States)

    Li, Honglei; Liu, Xian-yong; Zhou, Fangjie; Tan, Kexiong

    2005-02-01

    Recently, chromatography columns and gas sensors have been used in online monitoring devices for dissolved gases in transformer oil, but these devices still have disadvantages: consumption of carrier gas, the need for calibration, etc. Since FTIR has high accuracy, consumes no carrier gas, and requires no calibration, the researchers studied the application of FTIR in such monitoring devices. "Flow gas method" experiments were designed, and spectra of mixtures composed of different gases were collected with a BOMEM MB104 FTIR spectrometer. A key problem in the application of FTIR is that the absorbance spectra of three key fault gases, C2H4, CH4 and C2H6, overlap seriously at 2700-3400 cm-1. Because the absorbance law is no longer applicable there, a nonlinear calibration model based on a BP ANN was set up for the quantitative analysis. The peak absorbances of C2H4, CH4 and C2H6 were adopted as quantitative features, and all the data were normalized before training the ANN. Computing results show that the calibration model can effectively eliminate the cross-interference in the measurement.

  3. The e/h method of energy reconstruction for combined calorimeter

    International Nuclear Information System (INIS)

    Kul'chitskij, Yu.A.; Kuz'min, M.V.; Vinogradov, V.B.

    1999-01-01

    A new, simple method of energy reconstruction for a combined calorimeter, which we call the e/h method, is suggested. It uses only the known e/h ratios and the electron calibration constants, and does not require the determination of any parameters by a minimization technique. The method has been tested on the 1996 test beam data of the ATLAS barrel combined calorimeter and demonstrated correct reconstruction of the mean values of the energies. The obtained fractional energy resolution is [(58 ± 3)%/√E + (2.5 ± 0.3)%] ⊕ (1.7 ± 0.2) GeV/E. This algorithm can be used for fast energy reconstruction in the first-level trigger.

  4. Exploration, Sampling, And Reconstruction of Free Energy Surfaces with Gaussian Process Regression.

    Science.gov (United States)

    Mones, Letif; Bernstein, Noam; Csányi, Gábor

    2016-10-11

    Practical free energy reconstruction algorithms involve three separate tasks: biasing, measuring some observable, and finally reconstructing the free energy surface from those measurements. In more than one dimension, adaptive schemes make it possible to explore only relatively low lying regions of the landscape by progressively building up the bias toward the negative of the free energy surface so that free energy barriers are eliminated. Most schemes use the final bias as their best estimate of the free energy surface. We show that large gains in computational efficiency, as measured by the reduction of time to solution, can be obtained by separating the bias used for dynamics from the final free energy reconstruction itself. We find that biasing with metadynamics, measuring a free energy gradient estimator, and reconstructing using Gaussian process regression can give an order of magnitude reduction in computational cost.

  5. Annual electricity consumption forecasting by neural network in high energy consuming industrial sectors

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Sohrabkhani, S.

    2008-01-01

    This paper presents an artificial neural network (ANN) approach for forecasting annual electricity consumption in high energy consuming industrial sectors. The chemicals, basic metals and non-metal minerals industries are defined as high energy consuming industries. It is argued that, due to the high fluctuations of energy consumption in these industries, conventional regression models do not forecast energy consumption correctly and precisely. Although ANNs have typically been used to forecast short term consumption, this paper shows that they are a more precise approach to forecasting annual consumption in such industries. Furthermore, an ANN approach based on a supervised multi-layer perceptron (MLP) is used to show that it can estimate the annual consumption with less error. Actual data from high energy consuming (intensive) industries in Iran from 1979 to 2003 are used to illustrate the applicability of the ANN approach. This study shows the advantage of the ANN approach through analysis of variance (ANOVA). Furthermore, the ANN forecast is compared with the actual data and the conventional regression model through ANOVA to show its superiority. This is the first study to present an algorithm based on the ANN and ANOVA for forecasting long term electricity consumption in high energy consuming industries.

  6. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation: a soft-tissue-equivalent water fraction and a hard-matter-equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and produce beam-hardening artifacts (BHA). Existing BHA correction approaches either require calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem: the joint MAP estimation becomes a minimization problem with a nonquadratic cost function. To solve it, a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials.

  7. Anne Fine

    Directory of Open Access Journals (Sweden)

    Philip Gaydon

    2015-04-01

    Full Text Available An interview with Anne Fine, with an introduction and an aside on the role of children’s literature in our lives and development, and on our adult perceptions of the suitability of childhood reading material. Since graduating from Warwick in 1968 with a BA in Politics and History, Anne Fine has written over fifty books for children and eight for adults, won the Carnegie Medal twice (for Goggle-Eyes in 1989 and Flour Babies in 1992), been a highly commended runner-up three times (for Bill’s New Frock in 1989, The Tulip Touch in 1996, and Up on Cloud Nine in 2002), been shortlisted for the Hans Christian Andersen Award (the highest recognition available to a writer or illustrator of children’s books; 1998), undertaken the position of Children’s Laureate (2001-2003), and been awarded an OBE for her services to literature (2003). Warwick presented Fine with an Honorary Doctorate in 2005. Philip Gaydon’s interview with Anne Fine was recorded as part of the ‘Voices of the University’ oral history project, co-ordinated by Warwick’s Institute of Advanced Study.

  8. DESIGN OF A VISUAL INTERFACE FOR ANN BASED SYSTEMS

    Directory of Open Access Journals (Sweden)

    Ramazan BAYINDIR

    2008-01-01

    Full Text Available Alongside conventional control techniques, artificial intelligence methods have been used to control many systems as technology has developed. The growth of artificial intelligence applications has created a need for education in this area. In this paper, computer-based artificial neural network (ANN) software for learning and understanding artificial neural networks is presented. With the developed software, an artificial neural network can be trained on the inputs provided, and a test action can be performed while varying components of the network such as the iteration number, momentum factor, learning rate, and efficiency function. The result of the study is a visual education set that can easily be adapted to real-time applications.
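A minimal sketch of the training loop such a teaching tool exposes, with the iteration number, momentum factor, and learning rate as the user-tunable knobs; the XOR task, network size, and sigmoid activations are illustrative choices, not taken from the paper:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.5, momentum=0.9, iters=5000, seed=1):
    """Train a one-hidden-layer sigmoid MLP with backprop plus momentum.

    lr, momentum, and iters correspond to the tunable components exposed
    by the kind of ANN teaching software described above.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
    vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(iters):
        h = sig(X @ W1 + b1)                    # forward pass
        out = sig(h @ W2 + b2)
        d_out = (out - y) * out * (1 - out)     # backprop deltas (MSE loss)
        d_h = (d_out @ W2.T) * h * (1 - h)
        for p, v, g in ((W2, vW2, h.T @ d_out), (b2, vb2, d_out.sum(0)),
                        (W1, vW1, X.T @ d_h), (b1, vb1, d_h.sum(0))):
            v *= momentum                        # momentum term
            v -= lr * g / len(X)                 # gradient step
            p += v
    return lambda Xq: sig(sig(Xq @ W1 + b1) @ W2 + b2)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)        # XOR, the classic demo task
predict = train_mlp(X, y)
```

Changing `lr`, `momentum`, or `iters` and re-running is exactly the kind of experiment the described education set lets students perform interactively.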

  9. A deep learning-based reconstruction of cosmic ray-induced air showers

    Science.gov (United States)

    Erdmann, M.; Glombitza, J.; Walz, D.

    2018-01-01

    We describe a method of reconstructing air showers induced by cosmic rays using deep learning techniques. We simulate an observatory consisting of ground-based particle detectors with fixed locations on a regular grid. The detectors' responses to traversing shower particles are signal amplitudes as a function of time, which provide information on transverse and longitudinal shower properties. In order to take advantage of convolutional network techniques specialized in local pattern recognition, we convert all information to the image-like grid of the detectors. In this way, multiple features, such as arrival times of the first particles and optimized characterizations of time traces, are processed by the network. The reconstruction quality of the cosmic ray arrival direction turns out to be competitive with an analytic reconstruction algorithm. The reconstructed shower direction, energy and shower depth show the expected improvement in resolution for higher cosmic ray energy.
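The key preprocessing step, mapping each detector's time trace onto an image-like tensor whose channels hold engineered features (e.g. first-particle arrival time and total signal), can be sketched as follows. The grid size and feature choices here are illustrative, not the paper's exact configuration:

```python
import numpy as np

def hits_to_image(hits, grid=9):
    """Convert detector hits into a (grid, grid, 2) image-like tensor.

    Each hit is (ix, iy, t_first, trace): (ix, iy) is the detector's
    position on the regular grid, t_first the arrival time of the first
    particles, and trace the sampled signal amplitudes.
    Channel 0: arrival time; channel 1: total signal (trace sum).
    Stations without a signal stay zero.
    """
    img = np.zeros((grid, grid, 2))
    for ix, iy, t_first, trace in hits:
        img[ix, iy, 0] = t_first
        img[ix, iy, 1] = np.sum(trace)
    return img

# Three triggered stations of a simulated shower (illustrative values).
hits = [
    (4, 4, 0.0, [5.0, 3.0, 1.0]),   # near the shower core: early, large signal
    (4, 5, 12.5, [1.0, 0.5]),
    (3, 4, 13.1, [0.8, 0.4]),
]
image = hits_to_image(hits)
```

The resulting tensor can be fed directly to a standard 2D convolutional network, which is what makes local-pattern-recognition techniques applicable to the detector array.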

  10. Ann Tenno salapaigad / Margit Tõnson

    Index Scriptorium Estoniae

    Tõnson, Margit, 1978-

    2011-01-01

    On photographer Ann Tenno's interest in gardening and her photography in different parts of the world. On the new directions (photo manipulation, fractal art, thermal-camera photography) in her work. Excerpts from Ann Tenno's prose book "Üle unepiiri", published in 2010.

  11. Machine learning-based dual-energy CT parametric mapping.

    Science.gov (United States)

    Su, Kuan-Hao; Kuo, Jung-Wen; Jordan, David W; Van Hedent, Steven; Klahr, Paul; Wei, Zhouping; Al Helo, Rose; Liang, Fan; Qian, Pengjiang; Pereira, Gisele C; Rassouli, Negin; Gilkeson, Robert C; Traughber, Bryan J; Cheng, Chee-Wai; Muzic, Raymond F

    2018-05-22

    The aim is to develop and evaluate machine learning methods for generating quantitative parametric maps of effective atomic number (Zeff), relative electron density (ρe), mean excitation energy (Ix), and relative stopping power (RSP) from clinical dual-energy CT data. The maps could be used for material identification and radiation dose calculation. Machine learning methods of historical centroid (HC), random forest (RF), and artificial neural networks (ANN) were used to learn the relationship between dual-energy CT input data and ideal output parametric maps calculated for phantoms from the known compositions of 13 tissue substitutes. After training and model selection steps, the machine learning predictors were used to generate parametric maps from independent phantom and patient input data. Precision and accuracy were evaluated using the ideal maps. This process was repeated for a range of exposure doses, and performance was compared to that of the clinically used, physics-based dual-energy method, which served as the reference. The machine learning methods generated more accurate and precise parametric maps than those obtained using the reference method. Their performance advantage was particularly evident when using data from the lowest exposure, one-fifth of a typical clinical abdomen CT acquisition. The RF method achieved the greatest accuracy. In comparison, the ANN method was only 1% less accurate but had much better computational efficiency than RF, being able to produce parametric maps in 15 seconds. Machine learning methods outperformed the reference method in terms of accuracy and noise tolerance when generating parametric maps, encouraging further exploration of the techniques. Among the methods we evaluated, ANN is the most suitable for clinical use due to its combination of accuracy, excellent low-noise performance, and computational efficiency.
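Of the three learners compared, the historical-centroid idea is the simplest to sketch: assign each dual-energy voxel the parameters of the nearest known tissue substitute in (HU_low, HU_high) space. The tissue values below are placeholders for illustration, not the paper's calibration data:

```python
import numpy as np

# (HU_low, HU_high) centroids for a few tissue substitutes, with the
# (Zeff, rho_e) parameters assigned to each (illustrative numbers only).
centroids = np.array([[-1000.0, -1000.0],   # air-like
                      [0.0, 0.0],            # water-like
                      [900.0, 600.0]])       # bone-like
params = np.array([[7.6, 0.001],
                   [7.4, 1.000],
                   [12.3, 1.700]])

def predict_params(hu_pairs):
    """Map (HU_low, HU_high) voxel pairs to (Zeff, rho_e) via nearest centroid."""
    d = np.linalg.norm(hu_pairs[:, None, :] - centroids[None, :, :], axis=2)
    return params[np.argmin(d, axis=1)]

voxels = np.array([[5.0, -3.0],      # should map to the water-like class
                   [880.0, 610.0]])  # should map to the bone-like class
out = predict_params(voxels)
```

RF and ANN replace this hard nearest-neighbour assignment with smooth learned regressions, which is where the reported accuracy and noise-tolerance gains come from.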

  12. Development of a new software tool, based on ANN technology, in neutron spectrometry and dosimetry research

    International Nuclear Information System (INIS)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R.

    2007-01-01

    Artificial Intelligence is a branch of study which enhances the capability of computers by giving them human-like intelligence. The brain's architecture has been extensively studied, and attempts have been made to emulate it, as in artificial neural network technology. A large variety of neural network architectures have been developed, and they have gained widespread popularity over the last few decades. Their application is considered a substitute for many classical techniques that have been used for many years, as in the neutron spectrometry and dosimetry research areas. In previous works, a new approach called Robust Design of Artificial Neural Networks was applied to build an ANN topology capable of solving the neutron spectrometry and dosimetry problems within the Matlab programming environment. In this work, the knowledge stored in the Matlab ANN's synaptic weights was extracted in order to develop, for the first time, a customized software application based on ANN technology, which is proposed for use in the neutron spectrometry and simultaneous dosimetry fields. (Author)

  13. Development of a new software tool, based on ANN technology, in neutron spectrometry and dosimetry research

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz R, J.M.; Martinez B, M.R.; Vega C, H.R. [Universidad Autonoma de Zacatecas, Av. Ramon Lopez Velarde 801, A.P. 336, 98000 Zacatecas (Mexico)

    2007-07-01

    Artificial Intelligence is a branch of study which enhances the capability of computers by giving them human-like intelligence. The brain's architecture has been extensively studied, and attempts have been made to emulate it, as in artificial neural network technology. A large variety of neural network architectures have been developed, and they have gained widespread popularity over the last few decades. Their application is considered a substitute for many classical techniques that have been used for many years, as in the neutron spectrometry and dosimetry research areas. In previous works, a new approach called Robust Design of Artificial Neural Networks was applied to build an ANN topology capable of solving the neutron spectrometry and dosimetry problems within the Matlab programming environment. In this work, the knowledge stored in the Matlab ANN's synaptic weights was extracted in order to develop, for the first time, a customized software application based on ANN technology, which is proposed for use in the neutron spectrometry and simultaneous dosimetry fields. (Author)

  14. Designing the input vector to ANN-based models for short-term load forecast in electricity distribution systems

    International Nuclear Information System (INIS)

    Santos, P.J.; Martins, A.G.; Pires, A.J.

    2007-01-01

    The present trend toward electricity market restructuring increases the need for reliable short-term load forecast (STLF) algorithms to assist electric utilities in activities such as planning, operating and controlling electric energy systems. Methodologies such as artificial neural networks (ANN) have been widely used in the next-hour load forecast horizon with satisfactory results. However, this type of approach has had some shortcomings. Usually, the input vector (IV) is defined in an arbitrary way, mainly based on experience, on engineering judgment criteria and on concern about the ANN dimension, always taking into consideration the apparent correlations within the available endogenous and exogenous data. In this paper, an approach to defining the IV composition is proposed, with the main focus on reducing the influence of trial-and-error and common-sense judgments, which usually are not based on sufficient evidence of comparative advantages over previous alternatives. The proposal includes the assessment of the strictly necessary instances of the endogenous variable, both from the point of view of the contiguous values prior to the forecast to be made, and of the past values representing the trend of consumption at homologous time intervals of the past. It also assesses the influence of exogenous variables, again limiting their presence in the IV to the indispensable minimum. A comparison is made with two alternative IV structures previously proposed in the literature, also applied to the distribution sector. The paper is supported by a real case study in the distribution sector. (author)
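One evidence-based way to pick the endogenous lags for the IV, rather than relying on trial and error, is to rank candidate lags by the autocorrelation of the load series and keep only the strongest; this is a generic sketch of that idea, not the authors' selection procedure:

```python
import numpy as np

def select_lags(load, candidates, top_k=3):
    """Rank candidate lags by |autocorrelation| of the series and keep top_k."""
    x = np.asarray(load, float)
    x = x - x.mean()
    scores = {}
    for lag in candidates:
        a, b = x[:-lag], x[lag:]
        scores[lag] = abs(np.sum(a * b)) / np.sqrt(np.sum(a**2) * np.sum(b**2))
    # Highest-scoring lags first, then returned in ascending order.
    return sorted(sorted(scores, key=scores.get, reverse=True)[:top_k])

# Synthetic hourly-like load with a strong 24-step (daily) cycle.
t = np.arange(24 * 60)
rng = np.random.default_rng(1)
load = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)
lags = select_lags(load, candidates=[1, 2, 3, 6, 12, 24, 48])
```

For this daily-periodic series the homologous lags (multiples and harmonics of 24) dominate, illustrating how the contiguous-vs-homologous distinction in the abstract can be grounded in measured correlation rather than judgment.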

  15. Application of machine learning techniques to lepton energy reconstruction in water Cherenkov detectors

    Science.gov (United States)

    Drakopoulou, E.; Cowan, G. A.; Needham, M. D.; Playfer, S.; Taani, M.

    2018-04-01

    The application of machine learning techniques to the reconstruction of lepton energies in water Cherenkov detectors is discussed and illustrated for TITUS, a proposed intermediate detector for the Hyper-Kamiokande experiment. It is found that applying these techniques leads to an improvement of more than 50% in the energy resolution for all lepton energies compared to an approach based upon lookup tables. Machine learning techniques can be easily applied to different detector configurations and the results are comparable to likelihood-function based techniques that are currently used.

  16. Three-Dimensional Reconstruction and Solar Energy Potential Estimation of Buildings

    Science.gov (United States)

    Chen, Y.; Li, M.; Cheng, L.; Xu, H.; Li, S.; Liu, X.

    2017-12-01

    In the context of the construction of low-carbon, green and eco-cities, the capability of airborne and mobile LiDAR should be explored in urban renewable energy research. As the main landscape elements in the urban environment, buildings have large regular envelopes that can receive a huge amount of solar radiation. In this study, a relatively complete calculation scheme for building roof and façade solar utilization potential is proposed, using three-dimensional geometric feature information of buildings. To measure city-level building solar irradiance, precise three-dimensional building roof and façade models are first reconstructed from airborne and mobile LiDAR data, respectively. In order to obtain the precise geometric structure of building façades from mobile LiDAR data, a new method for structure detection and three-dimensional reconstruction of building façades is proposed. The method consists of three steps: preprocessing of façade points, detection of façade structure, and restoration and reconstruction of the building façade. As a result, the reconstruction method can effectively deal with missing areas caused by occlusion, viewpoint limitation, and uneven point density, realizing a highly complete 3D reconstruction of a building façade. Furthermore, window areas can be excluded for a more accurate estimation of solar utilization potential. The solar energy utilization potential of all building roofs and façades is then estimated using a solar irradiance model that combines analysis of building shading and sky diffuse radiation, based on the analysis of the geometrical structure of the buildings.

  17. Use of artificial neural networks for transport energy demand modeling

    International Nuclear Information System (INIS)

    Murat, Yetis Sazi; Ceylan, Halim

    2006-01-01

    The paper illustrates an artificial neural network (ANN) approach based on supervised neural networks for transport energy demand forecasting using socio-economic and transport-related indicators. The ANN transport energy demand model is developed. The actual forecast is obtained using a feed-forward neural network trained with the back-propagation algorithm. In order to investigate the influence of socio-economic indicators on transport energy demand, the ANN is analyzed based on gross national product (GNP), population and the total annual average vehicle-km, along with historical energy data available from 1970 to 2001. Model validation is performed by comparing model predictions with energy data in the testing period. The projections are made with two scenarios. It is found that the ANN reflects the fluctuation in historical data for both dependent and independent variables. The results obtained bear out the suitability of the adopted methodology for the transport energy-forecasting problem.

  18. Three-dimensional Reconstruction of Block Shape Irregularity and its Effects on Block Impacts Using an Energy-Based Approach

    Science.gov (United States)

    Zhang, Yulong; Liu, Zaobao; Shi, Chong; Shao, Jianfu

    2018-04-01

    This study is devoted to three-dimensional modeling of small falling rocks in block impact analysis, from an energy point of view, using the particle flow method. The restitution coefficient of rockfall collision is introduced from the energy consumption mechanism to describe rockfall impact properties. Three-dimensional reconstruction of the falling block is conducted with the help of spherical harmonic functions, which have satisfactory mathematical properties such as orthogonality and rotation invariance. Numerical modeling of the block impact on the bedrock is analyzed with both a sphere-simplified model and the 3D reconstructed model. Comparison of the obtained results suggests that the 3D reconstructed model is advantageous in considering the combined effects of rockfall velocity and rotation during the colliding process. Verification of the modeling is carried out against results obtained from other experiments. In addition, the effects of rockfall morphology, surface characteristics, velocity and volume, colliding damping, and relative angle are investigated. A three-dimensional reconstruction module for falling blocks is to be developed and incorporated into rockfall simulation tools in order to extend the modeling results from the block scale to the slope scale.

  19. Annely Peebo kutsus presidendi kontserdile / Maria Ulfsak

    Index Scriptorium Estoniae

    Ulfsak, Maria, 1981-

    2003-01-01

    Singer Annely Peebo met with President Arnold Rüütel to hand over an invitation to the joint concert of Andrea Bocelli and Annely Peebo. See also: Andrea Bocelli and Annely Peebo's concert at the Tallinn Song Festival Grounds on 23 August; Andrea Bocelli

  20. A computationally efficient OMP-based compressed sensing reconstruction for dynamic MRI

    International Nuclear Information System (INIS)

    Usman, M; Prieto, C; Schaeffter, T; Batchelor, P G; Odille, F; Atkinson, D

    2011-01-01

    Compressed sensing (CS) methods in MRI are computationally intensive. Thus, designing novel CS algorithms that can perform faster reconstructions is crucial for everyday applications. We propose a computationally efficient orthogonal matching pursuit (OMP)-based reconstruction, specifically suited to cardiac MR data. According to the energy distribution of a y-f space obtained from a sliding window reconstruction, we label the y-f space as static or dynamic. For static y-f space images, a computationally efficient masked OMP reconstruction is performed, whereas for dynamic y-f space images, standard OMP reconstruction is used. The proposed method was tested on a dynamic numerical phantom and two cardiac MR datasets. Depending on the field of view composition of the imaging data, compared to the standard OMP method, reconstruction speedup factors ranging from 1.5 to 2.5 are achieved. (note)
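A minimal orthogonal matching pursuit, the building block that the masked variant above specializes, greedily selects the dictionary atom most correlated with the residual and re-solves a least-squares fit on the chosen support. Problem sizes and sparsity here are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Recover a k-sparse x from y = A @ x by orthogonal matching pursuit."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        # Atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit restricted to the selected support.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80))
A /= np.linalg.norm(A, axis=0)             # unit-norm dictionary columns
x_true = np.zeros(80)
x_true[[7, 23, 61]] = [1.5, -2.0, 1.0]     # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
```

The speedup idea in the abstract comes from running this loop with a restricted (masked) support for y-f slices classified as static, so that the greedy search has far fewer atoms to consider.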

  1. Muon Energy Reconstruction Through the Multiple Scattering Method in the NO$\mathrm{\nu}$A Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Psihas Olmedo, Silvia Fernanda [Univ. of Minnesota, Duluth, MN (United States)

    2013-06-01

    Neutrino energy measurements are a crucial component in the experimental study of neutrino oscillations. These measurements are done through the reconstruction of neutrino interactions and energy measurements of their products. This thesis presents the development of a technique to reconstruct the energy of muons from neutrino interactions in the NO$\mathrm{\nu}$A detectors.

  2. Muon Energy Reconstruction Through the Multiple Scattering Method in the NO$\mathrm{\nu}$A Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Psihas Olmedo, Silvia Fernanda [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)

    2015-01-01

    Neutrino energy measurements are a crucial component in the experimental study of neutrino oscillations. These measurements are done through the reconstruction of neutrino interactions and energy measurements of their products. This thesis presents the development of a technique to reconstruct the energy of muons from neutrino interactions in the NO$\mathrm{\nu}$A detectors.

  3. Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN

    International Nuclear Information System (INIS)

    Peter, Josephine; Doloi, B.; Bhattacharyya, B.

    2011-01-01

    The present research paper deals with artificial neural network (ANN) and response surface methodology (RSM) based mathematical modeling, as well as an optimization analysis of marking characteristics on alumina ceramic. The experiments were planned and carried out based on Design of Experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The output of the RSM optimization is validated through experimentation and the ANN predictive model. Good agreement is observed between the results based on the ANN predictive model and the actual experimental observations.

  4. Comprehensive Forecast of Urban Water-Energy Demand Based on a Neural Network Model

    Directory of Open Access Journals (Sweden)

    Ziyi Yin

    2018-03-01

    Full Text Available The water-energy nexus has been a popular topic of research in recent years. The relationships between the demand for water resources and energy are intense and closely connected in urban areas. The primary, secondary, and tertiary industry gross domestic product (GDP), the total population, the urban population, annual precipitation, agricultural and industrial water consumption, tap water supply, the total discharge of industrial wastewater, the daily sewage treatment capacity, total and domestic electricity consumption, and the consumption of coal in industrial enterprises above the designated size were chosen as input indicators. A feedforward artificial neural network (ANN) based on a back-propagation algorithm with two hidden layers was constructed to relate urban water resources to energy demand. This model used historical data from 1991 to 2016 from Wuxi City, eastern China. Furthermore, a multiple linear regression model (MLR) was introduced for comparison with the ANN. The results show the following: (a) the mean relative error values of the forecast and historical urban water-energy demands are 1.58% and 2.71%, respectively; (b) the predicted water-energy demand value for 2020 is 4.843 billion cubic meters and 47.561 million tons of standard coal equivalent; (c) the predicted water-energy demand value for 2030 is 5.887 billion cubic meters and 60.355 million tons of standard coal equivalent; (d) compared with the MLR, the ANN performed better in fitting the training data, achieved more satisfactory accuracy, and may provide a reference for urban water-energy supply planning decisions.

  5. Flow forecast by SWAT model and ANN in Pracana basin, Portugal

    NARCIS (Netherlands)

    Demirel, M.C.; Venancio, Anabela; Kahya, Ercan

    2009-01-01

    This study provides a unique opportunity to analyze the issue of flow forecasting based on the soil and water assessment tool (SWAT) and artificial neural network (ANN) models. In the last two decades, ANNs have been extensively applied to various water resources system problems.

  6. Algorithm of hadron energy reconstruction for combined calorimeters in the DELPHI detector

    International Nuclear Information System (INIS)

    Gotra, Yu.N.; Tsyganov, E.N.; Zimin, N.I.; Zinchenko, A.I.

    1989-01-01

    An algorithm for hadron energy reconstruction from the responses of electromagnetic and hadron calorimeters is described. The investigations were carried out using the full-scale prototype of the modules of the hadron calorimeter's cylindrical part. The proposed algorithm improves energy resolution by 5-7% while preserving the linearity of the reconstructed hadron energy. 5 refs.; 4 figs.; 1 tab

  7. Polynomials, Riemann surfaces, and reconstructing missing-energy events

    CERN Document Server

    Gripaios, Ben; Webber, Bryan

    2011-01-01

    We consider the problem of reconstructing energies, momenta, and masses in collider events with missing energy, along with the complications introduced by combinatorial ambiguities and measurement errors. Typically, one reconstructs more than one value and we show how the wrong values may be correlated with the right ones. The problem has a natural formulation in terms of the theory of Riemann surfaces. We discuss examples including top quark decays in the Standard Model (relevant for top quark mass measurements and tests of spin correlation), cascade decays in models of new physics containing dark matter candidates, decays of third-generation leptoquarks in composite models of electroweak symmetry breaking, and Higgs boson decay into two tau leptons.

  8. Prediction of temperature and HAZ in thermal-based processes with Gaussian heat source by a hybrid GA-ANN model

    Science.gov (United States)

    Fazli Shahri, Hamid Reza; Mahdavinejad, Ramezanali

    2018-02-01

    Thermal-based processes with a Gaussian heat source often produce excessive temperatures which can leave thermally affected layers in specimens. The temperature distribution and heat-affected zone (HAZ) of materials are therefore two critical factors, both influenced by different process parameters. Measurement of the HAZ thickness and temperature distribution within these processes is not only difficult but also expensive. This research aims at gaining valuable knowledge of these factors by predicting the process through a novel combinatory model. In this study, an integrated artificial neural network (ANN) and genetic algorithm (GA) was used to predict the HAZ and temperature distribution of the specimens. To this end, a full factorial design of experiments was first conducted by applying a Gaussian heat flux to Ti-6Al-4V, and the temperature of the specimen was measured by infrared thermography. The HAZ width of each sample was investigated by measuring the microhardness. Secondly, the experimental data were used to create a GA-ANN model. The efficiency of the GA in designing and optimizing the architecture of the ANN was investigated. The GA was used to determine the optimal number of neurons in the hidden layer, and the learning rate and momentum coefficient of both the output and hidden layers of the ANN. Finally, the reliability of the models was assessed against the experimental results and statistical indicators. The results demonstrated that the combinatory model predicted the HAZ and temperature more effectively than a trial-and-error ANN model.
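The GA-over-hyperparameters idea can be sketched independently of the welding data: evolve (hidden-neuron count, learning rate) pairs, score each by a validation loss, and keep the fittest. The fitness function below is a stand-in surrogate with an assumed optimum, not the paper's ANN training loop, and crossover is omitted for brevity (selection plus mutation only):

```python
import numpy as np

rng = np.random.default_rng(42)

def fitness(n_hidden, lr):
    """Surrogate validation error, minimized at n_hidden=12, lr=0.05 (assumed).
    In the real GA-ANN, this would train an ANN and return its test error."""
    return (n_hidden - 12) ** 2 / 100.0 + (np.log10(lr) - np.log10(0.05)) ** 2

def evolve(pop_size=20, generations=30):
    # Individuals: (n_hidden in [2, 32], lr in [1e-4, 1], log-uniform).
    pop = np.column_stack([rng.integers(2, 33, pop_size).astype(float),
                           10 ** rng.uniform(-4, 0, pop_size)])
    for _ in range(generations):
        scores = np.array([fitness(n, lr) for n, lr in pop])
        parents = pop[np.argsort(scores)[: pop_size // 2]]   # elitist selection
        children = parents[rng.integers(0, len(parents), pop_size - len(parents))].copy()
        # Mutate: jitter neuron count by +/-2, learning rate by ~0.1 decades.
        children[:, 0] = np.clip(children[:, 0] + rng.integers(-2, 3, len(children)), 2, 32)
        children[:, 1] = np.clip(children[:, 1] * 10 ** rng.normal(0, 0.1, len(children)), 1e-4, 1)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(n, lr) for n, lr in pop])
    return pop[np.argmin(scores)]

best_hidden, best_lr = evolve()
```

Swapping the surrogate for an actual train-and-validate call turns this sketch into the hybrid scheme the abstract describes, at the cost of one full ANN training per fitness evaluation.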

  9. Reconstruction, Energy Calibration, and Identification of Hadronically Decaying Tau Leptons in the ATLAS Experiment for Run-2 of the LHC

    CERN Document Server

    The ATLAS collaboration

    2015-01-01

    The reconstruction algorithm, energy calibration, and identification methods for hadronically decaying tau leptons in ATLAS used at the start of Run-2 of the Large Hadron Collider are described in this note. All algorithms have been optimised for Run-2 conditions. The energy calibration relies on Monte Carlo samples with hadronic tau lepton decays, and applies multiplicative factors based on the pT of the reconstructed tau lepton to the energy measurements in the calorimeters. The identification employs boosted decision trees. Systematic uncertainties on the energy scale, reconstruction efficiency and identification efficiency of hadronically decaying tau leptons are determined using Monte Carlo samples that simulate varying conditions.

  10. Hadronic energy reconstruction in the CALICE combined calorimeter system

    Energy Technology Data Exchange (ETDEWEB)

    Israeli, Yasmine [Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany); Collaboration: CALICE-D-Collaboration

    2016-07-01

    Future linear electron-positron colliders, ILC and CLIC, aim for precision measurements and discoveries beyond and complementary to the program of the LHC. For this purpose, detectors with the capability for sophisticated reconstruction of final states with energy resolutions substantially beyond the current state of the art are being designed. The CALICE collaboration develops highly granular calorimeters for future colliders, among them silicon-tungsten electromagnetic calorimeters and hadronic calorimeters with scintillators read out by SiPMs. Such a combined system was tested with hadrons at CERN as well as at Fermilab. In this contribution, we report on the energy reconstruction in the combined setup, which requires different intercalibration factors to account for the varying longitudinal sampling of sub-detectors. Software compensation methods are applied to improve the energy resolution and to compensate for the different energy deposit of hadronic and electromagnetic showers.
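Fitting the intercalibration factors mentioned above can be sketched as a least-squares problem: given per-event ECAL and HCAL sums and known test-beam energies, solve for the weights in E_beam ≈ a·E_ecal + b·E_hcal. The detector response model and all numbers below are synthetic; real software compensation additionally reweights individual hits by their energy density:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
E_beam = rng.uniform(10, 80, n)          # known test-beam energies (GeV)
f_em = rng.uniform(0.2, 0.8, n)          # electromagnetic shower fraction

# Synthetic sub-detector responses in raw units, with different effective
# samplings (factors 2.0 and 0.5) and smearing to mimic resolution.
E_ecal = 2.0 * f_em * E_beam * (1 + 0.03 * rng.standard_normal(n))
E_hcal = 0.5 * (1 - f_em) * E_beam * (1 + 0.05 * rng.standard_normal(n))

# Solve E_beam ~= a * E_ecal + b * E_hcal for intercalibration factors (a, b).
A = np.column_stack([E_ecal, E_hcal])
(a, b), *_ = np.linalg.lstsq(A, E_beam, rcond=None)
E_reco = a * E_ecal + b * E_hcal
```

With the response model above, the fit should recover approximately a = 0.5 and b = 2.0, the inverses of the assumed samplings, and the reconstructed energy tracks the beam energy to within the injected smearing.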

  11. Japan's post-Fukushima reconstruction: A case study for implementation of sustainable energy technologies

    International Nuclear Information System (INIS)

    Nesheiwat, Julia; Cross, Jeffrey S.

    2013-01-01

    Following World War II, Japan miraculously developed into an economic powerhouse and a model of energy efficiency among developed countries. This lasted more than 65 years, until the nuclear crisis induced by the Northeastern Japan earthquake and tsunami of March 2011 brought Japan to an existential crossroads. Instead of implementing its plans to increase nuclear power generation capacity from thirty percent to fifty percent, Japan shut down all fifty-four nuclear reactors for safety checks and stress tests (two have since been restarted), resulting in reduced power generation during the summer of 2012. The reconstruction of Northeastern Japan comes at a time when the world is grappling with a transition to sustainable energy technologies, one that will require substantial investment but would result in fundamental changes in infrastructure and energy efficiency. Certain reconstruction methods can be inappropriate in the social, cultural and climatic context of disaster-affected areas. How, then, can practitioners employ sustainable reconstruction that better responds to local housing needs and the availability of natural energy resources without a framework in place? This paper aims at sensitizing policy-makers and stakeholders involved in post-disaster reconstruction to the advantages of deploying sustainable energy technologies to reduce the dependence of vulnerable communities on external markets. - Highlights: • We examine the energy challenges faced by Japan in the aftermath of Fukushima. • We identify policy measures for the use of energy technologies applicable to disaster-prone nations. • We evaluate the potential for renewable energy to support reduced reliance on nuclear energy in Japan. • We model scenarios for eco-towns and smart-cities in post-disaster reconstruction. • We assess the role of culture in formulating energy policy in post-disaster reconstruction

  12. Evaluation and prediction of solar radiation for energy management based on neural networks

    Science.gov (United States)

    Aldoshina, O. V.; Van Tai, Dinh

    2017-08-01

    Currently, renewable energy sources and distributed power generation based on intelligent networks are spreading at a high rate; meteorological forecasts are therefore particularly useful for planning and managing the energy system in order to increase its overall efficiency and productivity. This article presents the application of artificial neural networks (ANN) in the field of photovoltaic energy. Two recurrent dynamic ANNs implemented in this study, a concentrated time-delay neural network (CTDNN) and a nonlinear autoregressive network with exogenous inputs (NAEI), are used in the development of a model for estimating and daily forecasting of solar radiation. The ANNs show good performance, as reliable and accurate models of daily solar radiation are obtained; this makes it possible to predict the photovoltaic output power of the installation. The potential of the proposed method for managing the energy of the electrical network is shown using the example of the NAEI network applied to electric load prediction.
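The NAEI model described above is a nonlinear autoregressive structure with exogenous inputs: the next value of the series is predicted from its own lagged values plus an external driver. A minimal sketch of that structure, with an entirely synthetic "radiation" series and a linear-in-features regressor standing in for the neural network (all names, coefficients and features are illustrative, not the paper's model):

```python
import numpy as np

def narx_design(y, x, lags=2):
    """Build NARX-style regression rows: lagged outputs, the exogenous
    input at time t, and a couple of simple nonlinear terms."""
    rows, targets = [], []
    for t in range(lags, len(y)):
        rows.append([y[t-1], y[t-2], x[t], y[t-1]**2, y[t-1]*x[t]])
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Synthetic "solar radiation" series driven by a daily cycle (exogenous input).
t = np.arange(200)
x = 0.5 + 0.5*np.sin(2*np.pi*t/24)
y = np.zeros_like(x)
for k in range(2, len(t)):
    y[k] = 0.6*y[k-1] - 0.2*y[k-2] + 0.8*x[k]

# Fit the regressor by least squares; a trained ANN would replace this step.
A, b = narx_design(y, x)
w, *_ = np.linalg.lstsq(A, b, rcond=None)
rmse = float(np.sqrt(np.mean((A @ w - b)**2)))
```

Because the synthetic series is generated by a process that lies inside the feature set, the one-step-ahead fit is essentially exact; real radiation data would of course leave a residual.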

  13. Mastectomy Skin Necrosis After Breast Reconstruction: A Comparative Analysis Between Autologous Reconstruction and Implant-Based Reconstruction.

    Science.gov (United States)

    Sue, Gloria R; Lee, Gordon K

    2018-05-01

    Mastectomy skin necrosis is a significant problem after breast reconstruction. We sought to perform a comparative analysis of this complication between patients undergoing autologous breast reconstruction and patients undergoing 2-stage expander implant breast reconstruction. A retrospective review was performed on consecutive patients undergoing autologous breast reconstruction or 2-stage expander implant breast reconstruction by the senior author from 2006 through 2015. Patient demographic factors including age, body mass index, history of diabetes, history of smoking, and history of radiation to the breast were collected. Our primary outcome measure was mastectomy skin necrosis. The Fisher exact test was used for statistical analysis between the 2 patient cohorts. The treatment patterns of mastectomy skin necrosis were then analyzed. We identified 204 patients who underwent autologous breast reconstruction and 293 patients who underwent 2-stage expander implant breast reconstruction. Patients undergoing autologous breast reconstruction were older, heavier, more likely to have diabetes, and more likely to have had prior radiation to the breast compared with patients undergoing implant-based reconstruction. The incidence of mastectomy skin necrosis was 30.4% in the autologous group compared with only 10.6% in the tissue expander group, a statistically significant difference. Whereas necrosis was typically managed with local wound care in the autologous group, only 3.2% of cases in the tissue expander group were treated with local wound care. Mastectomy skin necrosis is significantly more likely to occur after autologous breast reconstruction than after 2-stage expander implant-based breast reconstruction. Patients with autologous reconstructions are more readily treated with local wound care compared with patients with tissue expanders, who tended to require operative treatment of this complication. Patients considering breast reconstruction should be counseled appropriately regarding the differences in incidence and management of mastectomy skin necrosis.

  14. A page is turned with the departure of Anne-Sylvie Catherin

    CERN Multimedia

    Staff Association

    2016-01-01

    The Staff Association wants to thank Anne-Sylvie Catherin for her achievements during her career at CERN, and in particular during her mandate as head of the Human Resources Department. Anne-Sylvie Catherin arrived at CERN as a lawyer specialized in the labour law of international organizations, and she brought along her knowledge, as well as unparalleled energy and professionalism. The Staff Association has particularly appreciated her collaborative approach during discussions in the concertation process. This attitude has clearly contributed to the implementation of significant changes in the management of human resources, while preserving social peace. We expect that the person who succeeds Anne-Sylvie Catherin will show the same constructive attitude with respect to the concertation process, as well as continuity in the definition and implementation of HR processes. Finally, we hope that the new head of HR will be able to develop a long-term vision for the Organization and its staff, measuring to the vi...

  15. RegnANN: Reverse Engineering Gene Networks using Artificial Neural Networks.

    Directory of Open Access Journals (Sweden)

    Marco Grimaldi

    Full Text Available RegnANN is a novel method for reverse engineering gene networks based on an ensemble of multilayer perceptrons. The algorithm builds a regressor for each gene in the network, estimating its neighborhood independently. The overall network is obtained by joining all the neighborhoods. RegnANN makes no assumptions about the nature of the relationships between the variables, potentially capturing high-order and non linear dependencies between expression patterns. The evaluation focuses on synthetic data mimicking plausible submodules of larger networks and on biological data consisting of submodules of Escherichia coli. We consider Barabasi and Erdös-Rényi topologies together with two methods for data generation. We verify the effect of factors such as network size and amount of data to the accuracy of the inference algorithm. The accuracy scores obtained with RegnANN is methodically compared with the performance of three reference algorithms: ARACNE, CLR and KELLER. Our evaluation indicates that RegnANN compares favorably with the inference methods tested. The robustness of RegnANN, its ability to discover second order correlations and the agreement between results obtained with this new methods on both synthetic and biological data are promising and they stimulate its application to a wider range of problems.

  16. ANN based controller for three phase four leg shunt active filter for power quality improvement

    Directory of Open Access Journals (Sweden)

    J. Jayachandran

    2016-03-01

    Full Text Available In this paper, an artificial neural network (ANN) based one cycle control (OCC) strategy is proposed for the DSTATCOM shunted across the load in a three phase four wire distribution system. The proposed control strategy mitigates harmonic/reactive currents, ensures balanced and sinusoidal source currents from the supply mains that are nearly in phase with the supply voltage, and compensates the neutral current under varying source and load conditions. The proposed control strategy is superior to conventional methods as it eliminates the sensors needed for sensing the load current and the coupling inductor current, in addition to the multipliers and the calculation of reference currents. ANN controllers are implemented to maintain the voltage across the capacitor and as a compensator for the neutral current. The DSTATCOM performance is validated for all possible conditions of source and load by simulation using MATLAB software, and the simulation results prove the efficacy of the proposed control over the conventional control strategy.

  17. Super capacitor modeling with artificial neural network (ANN)

    Energy Technology Data Exchange (ETDEWEB)

    Marie-Francoise, J.N.; Gualous, H.; Berthon, A. [Universite de Franche-Comte, Lab. en Electronique, Electrotechnique et Systemes (L2ES), UTBM, INRETS (LRE T31) 90 - Belfort (France)

    2004-07-01

    This paper presents super-capacitor modeling using an Artificial Neural Network (ANN). The principle consists of a black-box nonlinear multiple-input single-output (MISO) model: the system inputs are temperature and current, and the output is the super-capacitor voltage. The learning and validation of the ANN model from experimental charge and discharge data of super-capacitors establish the relationship between inputs and output, using experimental results from 2700 F and 3700 F super-capacitors and a super-capacitor pack. Once the network is trained, the ANN model can predict the super-capacitor behaviour under temperature variations. The parameters of the ANN model are updated with the Levenberg-Marquardt method in order to minimize the error between the output of the system and the predicted output. The results obtained with the ANN model of the super-capacitor and the experimental ones are in good agreement. (authors)
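The Levenberg-Marquardt update mentioned above damps the Gauss-Newton step so that an ill-conditioned Jacobian does not destabilize training. A minimal one-parameter sketch on a toy exponential model (not the authors' network; the model, data and damping value are invented for illustration):

```python
import numpy as np

def lm_step(w, J, r, lam=1e-3):
    """One damped Gauss-Newton (Levenberg-Marquardt) update:
    solve (J^T J + lam*I) dw = -J^T r and return w + dw."""
    A = J.T @ J + lam * np.eye(len(w))
    return w - np.linalg.solve(A, J.T @ r)

# Toy model y = exp(a*x): fit the single parameter a by LM iterations,
# starting from an initial guess near the solution.
x = np.linspace(0.0, 1.0, 50)
y = np.exp(1.3 * x)
w = np.array([1.0])
for _ in range(30):
    r = np.exp(w[0] * x) - y               # residual vector
    J = (x * np.exp(w[0] * x))[:, None]    # Jacobian d r / d a
    w = lm_step(w, J, r)
```

Because the residual vanishes at the solution, the damping term does not bias the fixed point: the iteration converges to a = 1.3.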

  18. Design of an artificial neural network, with the topology oriented to the reconstruction of neutron spectra

    International Nuclear Information System (INIS)

    Arteaga A, T.; Ortiz R, J.M.; Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado S, G.A.

    2006-01-01

    People who live at high altitude or at latitudes far from the equator, or who travel by plane, are exposed to elevated radiation generated by cosmic rays. Radiation environments also arise around medical equipment, particle accelerators and nuclear reactors. The evaluation of the biological risk from neutron radiation requires appropriate and reliable dosimetry. A commonly used system is the Bonner Sphere Spectrometer (EEB), employed to reconstruct the spectrum; this is important because the neutron equivalent dose depends strongly on energy. The count rates obtained with each sphere are unfolded, in most cases, with iterative methods, Monte Carlo methods or Maximum Entropy. Each of them has difficulties that motivate the development of complementary procedures. Recently, Artificial Neural Networks (ANN) have been used, and no conclusive results have yet been obtained. In this work an ANN was designed to obtain the neutron energy spectrum from the count rates of an EEB. The ANN was trained with 129 reference spectra obtained from the IAEA (1990, 2001); 24 were built with defined energies, including reference and operational isotopic neutron sources, accelerator and reactor spectra, mathematical functions, and defined-energy spectra with several peaks. The spectra were transformed from lethargy units to energy and rebinned into 31 energy groups using the MCNP4C Monte Carlo code. The rebinned spectra and the UTA4 response matrix were used to calculate the expected count rates in the EEB. These rates were used as input, and the corresponding spectrum as output, during network training. The network is of backpropagation type with 5 layers of 7, 140, 140, 140 and 31 neurons, with transfer functions logsig, tansig, logsig, logsig and logsig, respectively, and the traingdx training algorithm. After training, the network was tested with a group of training spectra and others that
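The unfolding problem described above can be stated as a small linear system: the sphere count rates are the response matrix applied to the spectrum, with far fewer spheres (7) than energy groups (31). A sketch with a hypothetical random response matrix, showing why the naive minimum-norm inversion that the ANN is meant to improve on is ill-posed:

```python
import numpy as np

rng = np.random.default_rng(0)
n_spheres, n_bins = 7, 31            # matches the 7-input / 31-output ANN
R = rng.uniform(0.1, 1.0, (n_spheres, n_bins))   # hypothetical response matrix
phi = rng.uniform(0.0, 1.0, n_bins)              # "true" spectrum
counts = R @ phi                                  # sphere count rates

# The system counts = R @ phi is underdetermined (7 equations, 31 unknowns):
# the minimum-norm pseudo-inverse solution reproduces the counts exactly
# yet differs from the true spectrum, which is why regularized or learned
# unfolding methods are needed.
phi_mn = np.linalg.pinv(R) @ counts
```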

  19. Reconstruction of the interaction term between dark matter and dark energy using SNe Ia

    Energy Technology Data Exchange (ETDEWEB)

    Solano, Freddy Cueva; Nucamendi, Ulises, E-mail: freddy@ifm.umich.mx, E-mail: ulises@ifm.umich.mx [Instituto de Física y Matemáticas, Universidad Michoacana de San Nicolás de Hidalgo, Edificio C-3, Ciudad Universitaria, CP. 58040, Morelia, Michoacán (Mexico)

    2012-04-01

    We apply a parametric reconstruction method to a homogeneous, isotropic and spatially flat Friedmann-Robertson-Walker (FRW) cosmological model filled with a dark energy (DE) fluid with constant equation of state (EOS) parameter interacting with dark matter (DM). The reconstruction method is based on expansions of the general interaction term and the relevant cosmological variables in terms of Chebyshev polynomials, which form a complete set of orthonormal functions. This interaction term describes an exchange of energy flow between DE and DM within the dark sector. To show how the method works, we reconstruct the interaction function by expanding it in terms of only the first six Chebyshev polynomials and obtain the best estimate for the coefficients of the expansion assuming three models: (a) a DE equation of state parameter w = −1 (an interacting cosmological Λ), (b) a DE equation of state parameter w = constant with the dark matter density parameter fixed, and (c) a DE equation of state parameter w = constant with a free constant dark matter density parameter to be estimated, using the Union2 SNe Ia data set from 'The Supernova Cosmology Project' (SCP) composed of 557 type Ia supernovae. In all cases, the preliminary reconstruction shows that in the best scenario there exists the possibility of a crossing of the noninteracting line Q = 0 in the recent past, within the 1σ and 2σ errors, from positive values at early times to negative values at late times. This means that, in this reconstruction, there is an energy transfer from DE to DM at early times and an energy transfer from DM to DE at late times. We conclude that this fact is an indication of the possible existence of a crossing behavior in a general interaction coupling between the dark components.
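An expansion in the first six Chebyshev polynomials can be reproduced with NumPy's Chebyshev utilities; here a sign-changing stand-in function plays the role of the interaction term Q (the function and the rescaled variable are illustrative, not the paper's data):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Stand-in interaction term Q(z) that changes sign, sampled on the
# Chebyshev interval [-1, 1] (a rescaled redshift-like variable).
z = np.linspace(-1.0, 1.0, 200)
Q = np.tanh(2.0 * z)

coef = C.chebfit(z, Q, deg=5)    # expansion in T_0 ... T_5 (six polynomials)
Q_fit = C.chebval(z, coef)

max_err = float(np.max(np.abs(Q_fit - Q)))
# A crossing of Q = 0 is visible directly in the truncated reconstruction.
crosses_zero = bool(np.any(np.sign(Q_fit[:-1]) != np.sign(Q_fit[1:])))
```

For a smooth function the Chebyshev coefficients decay quickly, so even this six-term truncation tracks the sign change well.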

  20. Novel Formulation of Adaptive MPC as EKF Using ANN Model: Multiproduct Semibatch Polymerization Reactor Case Study.

    Science.gov (United States)

    Kamesh, Reddi; Rani, Kalipatnapu Yamuna

    2017-12-01

    In this paper, a novel formulation for nonlinear model predictive control (MPC) has been proposed, incorporating the extended Kalman filter (EKF) control concept and using a purely data-driven artificial neural network (ANN) model based on measurements for supervisory control. The proposed scheme consists of two modules: online parameter estimation based on past measurements, and control estimation over the control horizon based on minimizing the deviation of model output predictions from set points along the prediction horizon. An industrial case study for temperature control of a multiproduct semibatch polymerization reactor, posed as a challenge problem, has been considered as a test bed to apply the proposed ANN-EKFMPC strategy at the supervisory level in a cascade control configuration along with a proportional-integral (PI) controller [ANN-EKFMPC with PI (ANN-EKFMPC-PI)]. The proposed approach incorporates all aspects of MPC, including a move suppression factor for control effort minimization and constraint-handling capability including terminal constraints. The nominal stability analysis and offset-free tracking capabilities of the proposed controller are proved. Its performance is evaluated by comparison with a standard MPC-based cascade control approach using the same adaptive ANN model. The ANN-EKFMPC-PI control configuration has shown better controller performance in terms of temperature tracking, smoother input profiles, and constraint-handling ability compared with the ANN-MPC with PI approach for two products, in summer and winter. The proposed scheme is found to be versatile although it is based on a purely data-driven model with online parameter estimation.
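The online parameter estimation module rests on the standard EKF recursion; a scalar sketch estimating one unknown gain from noisy measurements (the model, noise levels and inputs are invented for illustration, not the paper's reactor model):

```python
import numpy as np

rng = np.random.default_rng(3)
theta_true = 2.5                 # unknown "process parameter" to estimate
theta_hat, P = 0.0, 1.0          # initial estimate and covariance
Qw, Rv = 1e-6, 0.01              # assumed process/measurement noise variances

for _ in range(200):
    u = rng.uniform(0.5, 1.5)                    # known input
    y = theta_true * u + rng.normal(scale=0.05)  # noisy measurement
    P += Qw                                      # predict covariance
    H = u                                        # d(measurement)/d(theta)
    K = P * H / (H * P * H + Rv)                 # Kalman gain
    theta_hat += K * (y - theta_hat * u)         # innovation update
    P *= (1.0 - K * H)                           # covariance update
```

After a couple hundred updates the estimate settles near the true value while the covariance P shrinks, which is the mechanism the supervisory layer relies on to keep the data-driven model current.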

  1. A framelet-based iterative maximum-likelihood reconstruction algorithm for spectral CT

    Science.gov (United States)

    Wang, Yingmei; Wang, Ge; Mao, Shuwei; Cong, Wenxiang; Ji, Zhilong; Cai, Jian-Feng; Ye, Yangbo

    2016-11-01

    Standard computed tomography (CT) cannot reproduce spectral information of an object. Hardware solutions include dual-energy CT, which scans the object twice at different x-ray energy levels, and energy-discriminative detectors, which can separate lower and higher energy levels from a single x-ray scan. In this paper, we propose a software solution and give an iterative algorithm that reconstructs an image with spectral information from just one scan with a standard energy-integrating detector. The spectral information obtained can be used to produce color CT images, spectral curves of the attenuation coefficient μ(r,E) at points inside the object, and photoelectric images, which are all valuable imaging tools in cancer diagnosis. Our software solution requires no changes to the hardware of a CT machine. With the Shepp-Logan phantom, we have found that although the photoelectric and Compton components were not perfectly reconstructed, their composite effect was very accurately reconstructed as compared to the ground truth and the dual-energy CT counterpart. This means that our proposed method has an intrinsic benefit in beam hardening correction and metal artifact reduction. The algorithm is based on a nonlinear polychromatic acquisition model for x-ray CT. The key technique is a sparse representation of iterations in a framelet system. Convergence of the algorithm is studied. This is believed to be the first application of framelet imaging tools to a nonlinear inverse problem.
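The nonlinear polychromatic acquisition model underlying the algorithm can be illustrated with a two-energy toy spectrum: the log-projection is no longer linear in path length, which is exactly the beam-hardening effect the method benefits from modeling (all numbers are illustrative):

```python
import numpy as np

# Two-energy toy version of the polychromatic acquisition model:
# measured projection p(L) = -ln( sum_E S(E) * exp(-mu(E)*L) ).
S = np.array([0.5, 0.5])          # normalized source spectrum weights
mu = np.array([0.4, 0.2])         # attenuation (1/cm) at the two energies
L = np.linspace(0.0, 10.0, 101)   # path length through the object (cm)

p = -np.log(S[0]*np.exp(-mu[0]*L) + S[1]*np.exp(-mu[1]*L))

# Beam hardening: the effective attenuation (local slope of p vs L)
# decreases with depth as the more strongly attenuated component dies out.
slope_start = (p[1] - p[0]) / (L[1] - L[0])
slope_end = (p[-1] - p[-2]) / (L[-1] - L[-2])
```

A monochromatic model would give a single constant slope; here the slope drifts between the two attenuation coefficients, so a linear (monochromatic) reconstruction of these data produces cupping and streak artifacts.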

  2. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Directory of Open Access Journals (Sweden)

    Salavat R. Aglyamov

    2007-01-01

    Full Text Available Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes the model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  3. Hadron Energy Reconstruction for ATLAS Barrel Combined Calorimeter Using Non-Parametrical Method

    CERN Document Server

    Kulchitskii, Yu A

    2000-01-01

    Hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter in the framework of the non-parametrical method is discussed. The non-parametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to fast energy reconstruction in a first level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values and the fractional energy resolution is [(58±3)%·√GeV/√E + (2.5±0.3)%] ⊕ (1.7±0.2) GeV/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74±0.04. Results of a study of the longitudinal hadronic shower development are also presented.
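The quoted resolution can be evaluated directly; the three terms (stochastic, constant, noise-like) are combined in quadrature, as the ⊕ symbol indicates. A small helper using only the central values and ignoring the quoted uncertainties:

```python
import numpy as np

def frac_resolution(E, a=0.58, b=0.025, c=1.7):
    """Fractional hadron energy resolution sigma/E for energy E in GeV:
    a/sqrt(E), b and c/E added in quadrature (central values only)."""
    return np.sqrt((a / np.sqrt(E))**2 + b**2 + (c / E)**2)
```

At E = 100 GeV this gives roughly 6.5%, and the resolution improves monotonically toward the constant term at higher energies.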

  4. Anne-Mette Langes plan for ADHD kongressen

    DEFF Research Database (Denmark)

    Lange, Anne-Mette

    2017-01-01

    http://medicinsktidsskrift.dk/behandlinger/psykiatri/699-anne-mette-langes-plan-for-adhd-kongressen.html

  5. 2011: What a year!

    CERN Multimedia

    Staff Association

    2012-01-01

    "What a year! And what an end to the year! The star of the year was the LHC, with its experiments, which once again were in the limelight. But we must also mention a whole troupe of important actors, in fields as different as antimatter and the CLOUD experiment." That is what the Director-General wrote to us on 20 December in his end-of-year greetings. Without forgetting, of course, the famous faster-than-light neutrinos sent towards Gran Sasso, which put CERN at the forefront of the world stage. These successes, which are the pride and strength of the Organization, were made possible...

  6. A Smart Forecasting Approach to District Energy Management

    Directory of Open Access Journals (Sweden)

    Baris Yuce

    2017-07-01

    Full Text Available This study presents a model for district-level electricity demand forecasting using a set of Artificial Neural Networks (ANNs) working in parallel, based on current energy loads and social parameters such as occupancy. A comprehensive sensitivity analysis is conducted to select the inputs of the ANNs by considering external weather conditions, occupancy type, the main income provider's employment status and related variables for the fuel poverty index. Moreover, detailed parameter tuning is conducted using various configurations for each individual ANN. The study also demonstrates the strength of the parallel ANN models in different seasons of the year. In the proposed district-level energy forecasting model, the training and testing stages of the parallel ANNs utilise a dataset from a group of six buildings. The aim of each individual ANN is to predict electricity consumption and the aggregated demand in sub-hourly time-steps. The inputs of each ANN are determined using Principal Component Analysis (PCA) and Multiple Regression Analysis (MRA) methods. The accuracy and consistency of the ANN predictions are evaluated using the Pearson coefficient and average percentage error across four seasons: winter, spring, summer, and autumn. The lowest prediction error for the aggregated demand is about 4.51% for the winter season and the largest prediction error is 8.82% for the spring season. The results demonstrate that peak demand can be predicted successfully and utilised to forecast and provide demand-side flexibility to aggregators for effective management of district energy systems.
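The PCA step used for input selection amounts to ranking principal components by explained variance and discarding near-redundant candidate inputs; a sketch with synthetic inputs where one candidate nearly duplicates another (the input names are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy candidate inputs: a weather signal, occupancy, and a nearly
# redundant copy of the weather signal (the kind PCA screening flags).
temp = rng.normal(size=500)
occupancy = rng.normal(size=500)
temp_copy = temp + 0.01 * rng.normal(size=500)

X = np.column_stack([temp, occupancy, temp_copy])
Xc = X - X.mean(axis=0)                      # center before PCA
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)              # variance fraction per component
```

Two components carry essentially all the variance, so one of the three candidate inputs adds almost no information and can be dropped before training the ANNs.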

  7. MO-DE-207A-05: Dictionary Learning Based Reconstruction with Low-Rank Constraint for Low-Dose Spectral CT

    International Nuclear Information System (INIS)

    Xu, Q; Liu, H; Xing, L; Yu, H; Wang, G

    2016-01-01

    Purpose: Spectral CT enabled by an energy-resolved photon-counting detector outperforms conventional CT in terms of material discrimination, contrast resolution, etc. One reconstruction method for spectral CT is to generate a color image from a reconstructed component in each energy channel. However, given the radiation dose, the number of photons in each channel is limited, which will result in strong noise in each channel and affect the final color reconstruction. Here we propose a novel dictionary learning method for spectral CT that combines dictionary-based sparse representation method and the patch based low-rank constraint to simultaneously improve the reconstruction in each channel and to address the inter-channel correlations to further improve the reconstruction. Methods: The proposed method has two important features: (1) guarantee of the patch based sparsity in each energy channel, which is the result of the dictionary based sparse representation constraint; (2) the explicit consideration of the correlations among different energy channels, which is realized by patch-by-patch nuclear norm-based low-rank constraint. For each channel, the dictionary consists of two sub-dictionaries. One is learned from the average of the images in all energy channels, and the other is learned from the average of the images in all energy channels except the current channel. With average operation to reduce noise, these two dictionaries can effectively preserve the structural details and get rid of artifacts caused by noise. Combining them together can express all structural information in current channel. Results: Dictionary learning based methods can obtain better results than FBP and the TV-based method. With low-rank constraint, the image quality can be further improved in the channel with more noise. The final color result by the proposed method has the best visual quality. 
Conclusion: The proposed method can effectively improve the image quality of low-dose spectral CT.

  8. MO-DE-207A-05: Dictionary Learning Based Reconstruction with Low-Rank Constraint for Low-Dose Spectral CT

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Q [Xi’an Jiaotong University, Xi’an (China); Stanford University School of Medicine, Stanford, CA (United States); Liu, H; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Yu, H [University of Massachusetts Lowell, Lowell, MA (United States); Wang, G [Rensselaer Polytechnic Instute., Troy, NY (United States)

    2016-06-15

    Purpose: Spectral CT enabled by an energy-resolved photon-counting detector outperforms conventional CT in terms of material discrimination, contrast resolution, etc. One reconstruction method for spectral CT is to generate a color image from a reconstructed component in each energy channel. However, given the radiation dose, the number of photons in each channel is limited, which will result in strong noise in each channel and affect the final color reconstruction. Here we propose a novel dictionary learning method for spectral CT that combines dictionary-based sparse representation method and the patch based low-rank constraint to simultaneously improve the reconstruction in each channel and to address the inter-channel correlations to further improve the reconstruction. Methods: The proposed method has two important features: (1) guarantee of the patch based sparsity in each energy channel, which is the result of the dictionary based sparse representation constraint; (2) the explicit consideration of the correlations among different energy channels, which is realized by patch-by-patch nuclear norm-based low-rank constraint. For each channel, the dictionary consists of two sub-dictionaries. One is learned from the average of the images in all energy channels, and the other is learned from the average of the images in all energy channels except the current channel. With average operation to reduce noise, these two dictionaries can effectively preserve the structural details and get rid of artifacts caused by noise. Combining them together can express all structural information in current channel. Results: Dictionary learning based methods can obtain better results than FBP and the TV-based method. With low-rank constraint, the image quality can be further improved in the channel with more noise. The final color result by the proposed method has the best visual quality. 
Conclusion: The proposed method can effectively improve the image quality of low-dose spectral CT.

  9. Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction

    Directory of Open Access Journals (Sweden)

    Chun-mei Li

    2016-01-01

    Full Text Available The spatial resolution of digital images is the critical factor that affects photogrammetry precision. Single-frame superresolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressed sensing, pseudodictionary-based superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image using the K-SVD algorithm, which exploits the sparse characteristics of the digital image. Then, the sparse representation coefficients of the low-resolution image are obtained by solving the l0-norm minimization problem, and the sparse coefficients and high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame-image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that it effectively increases image resolution, increases image information content, and achieves superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of visual effect and quantitative indicators.
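K-SVD alternates a sparse-coding step with dictionary-atom updates; the sparse-coding half is typically orthogonal matching pursuit (OMP), sketched here over a random unit-norm dictionary. The dictionary and signal are synthetic, and greedy selection is not guaranteed to recover the true support:

```python
import numpy as np

def omp(D, y, k):
    """Greedy sparse coding (orthogonal matching pursuit): pick up to k
    atoms, refitting all selected coefficients by least squares each time."""
    idx, r = [], y.copy()
    w = np.zeros(0)
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ r)))   # atom most correlated with residual
        if j not in idx:
            idx.append(j)
        w, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        r = y - D[:, idx] @ w                 # update residual
    x = np.zeros(D.shape[1])
    x[idx] = w
    return x

rng = np.random.default_rng(2)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)    # overcomplete dictionary, unit-norm atoms
x_true = np.zeros(128)
x_true[[3, 17]] = [1.0, -0.7]
y = D @ x_true                    # a 2-sparse signal (e.g. an image patch)
x_hat = omp(D, y, k=2)
```

In K-SVD proper, this coding step alternates with an SVD-based update of each dictionary atom; here only the coding half is shown.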

  10. Biogas engine performance estimation using ANN

    Directory of Open Access Journals (Sweden)

    Yusuf Kurtgoz

    2017-12-01

    Full Text Available An artificial neural network (ANN) method was used to estimate the thermal efficiency (TE), brake specific fuel consumption (BSFC) and volumetric efficiency (VE) values of a spark ignition biogas engine at different methane (CH4) ratios and engine load values. For this purpose, the biogas used in the engine was produced by anaerobic fermentation of bovine manure, and different CH4 contents (51%, 57%, 87%) were obtained by purification of CO2 and H2S. The data used in the ANN models were obtained experimentally from a 4-stroke, four-cylinder, spark ignition engine, at constant speed for different load values and CH4 ratios. Using some of the obtained experimental data, ANN models were developed, and the rest was used to test the developed models. In the ANN models, the CH4 ratio of the fuel, engine load, inlet air temperature (Tin), air-fuel ratio and the maximum cylinder pressure were chosen as the input parameters; TE, BSFC and VE were used as the output parameters. Root mean square error (RMSE), mean absolute percentage error (MAPE) and correlation coefficient (R) performance indicators were used to compare measured and predicted values. It has been shown that ANN models give good results for spark ignition biogas engines, with high correlation and low error rates for TE, BSFC and VE values.
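The three performance indicators named above are one-liners; a minimal implementation with a small worked example (the sample values are invented):

```python
import numpy as np

def rmse(y, yhat):
    """Root mean square error."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(np.sqrt(np.mean((y - yhat)**2)))

def mape(y, yhat):
    """Mean absolute percentage error, in percent (y must be nonzero)."""
    y, yhat = np.asarray(y, float), np.asarray(yhat, float)
    return float(100.0 * np.mean(np.abs((y - yhat) / y)))

def corr(y, yhat):
    """Pearson correlation coefficient R."""
    return float(np.corrcoef(y, yhat)[0, 1])

y_meas = [10.0, 20.0, 30.0]
y_pred = [11.0, 19.0, 30.0]
```

For these values MAPE is exactly 5% and RMSE is √(2/3) ≈ 0.816, while R stays just below 1.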

  11. On artefact-free reconstruction of low-energy (30–250 eV) electron holograms

    Energy Technology Data Exchange (ETDEWEB)

    Latychevskaia, Tatiana, E-mail: tatiana@physik.uzh.ch; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner

    2014-10-15

    Low-energy electrons (30–250 eV) have been successfully employed for imaging individual biomolecules. The simplest and most elegant design of a low-energy electron microscope for imaging biomolecules is a lensless setup that operates in the holographic mode. In this work we address the problems associated with reconstruction from the recorded holograms. We discuss the twin image problem intrinsic to inline holography and the problem of the so-called biprism-like effect specific to low-energy electrons. We demonstrate how the presence of the biprism-like effect can be efficiently identified and circumvented. The presented sideband filtering reconstruction method eliminates the twin image and allows for reconstruction despite the biprism-like effect, which we demonstrate on both simulated and experimental examples. - Highlights: • Radiation damage-free imaging of individual biomolecules. • Elimination of the twin image in inline holograms. • Circumventing the biprism-like effect in low-energy electron holograms. • Artefact-free reconstructions of low-energy electron holograms.
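Sideband filtering suppresses the twin image by keeping only one of the two conjugate sidebands in Fourier space; a 1-D toy sketch (the fringe frequency, envelope and mask window are invented for illustration, not the experimental parameters):

```python
import numpy as np

# Toy 1-D "hologram": a fringe carrier modulated by the object envelope.
n = 256
x = np.arange(n)
envelope = np.exp(-((x - n/2) / 40.0)**2)         # object amplitude
hologram = 1.0 + envelope * np.cos(2*np.pi*0.2*x)  # DC + two sidebands

# Keep only the positive-frequency sideband: this discards the DC term
# and the conjugate sideband that produces the twin image.
H = np.fft.fft(hologram)
mask = np.zeros(n)
mask[30:80] = 1.0                                  # carrier sits near bin 51
recon = np.abs(np.fft.ifft(H * mask))              # ~ envelope / 2
```

The magnitude of the filtered reconstruction recovers the object envelope (up to a factor of 1/2), free of the DC and twin-image terms that a direct inverse transform of the full hologram would contain.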

  12. On artefact-free reconstruction of low-energy (30–250 eV) electron holograms

    International Nuclear Information System (INIS)

    Latychevskaia, Tatiana; Longchamp, Jean-Nicolas; Escher, Conrad; Fink, Hans-Werner

    2014-01-01

    Low-energy electrons (30–250 eV) have been successfully employed for imaging individual biomolecules. The simplest and most elegant design of a low-energy electron microscope for imaging biomolecules is a lensless setup that operates in the holographic mode. In this work we address the problems associated with reconstruction from the recorded holograms. We discuss the twin image problem intrinsic to inline holography and the problem of the so-called biprism-like effect specific to low-energy electrons. We demonstrate how the presence of the biprism-like effect can be efficiently identified and circumvented. The presented sideband filtering reconstruction method eliminates the twin image and allows for reconstruction despite the biprism-like effect, which we demonstrate on both simulated and experimental examples. - Highlights: • Radiation damage-free imaging of individual biomolecules. • Elimination of the twin image in inline holograms. • Circumventing the biprism-like effect in low-energy electron holograms. • Artefact-free reconstructions of low-energy electron holograms

  13. Energy savings from housing: Ineffective renovation subsidies vs efficient demolition and reconstruction incentives

    International Nuclear Information System (INIS)

    Dubois, Maarten; Allacker, Karen

    2015-01-01

    Energy savings in the housing sector are key to reducing global greenhouse gas emissions. Policies to incentivize energy savings are, however, disparate between countries. Taking into account environmental aspects and consumer surplus, the paper uses a stylized economic model to assess the effectiveness and efficiency of three economic instruments: subsidies for renovation, subsidies for demolition and reconstruction projects, and subsidies for building new houses on virgin land. The assessment also relates to differentiated value added taxes and other financial incentives such as green loans. In a counter-intuitive way, the model highlights that subsidies for renovations with minor energy gains worsen the overall energy consumption of housing due to the inducement of lock-ins with energy-inefficient houses. Structural changes are needed in the use of policy instruments. First, commonly applied support schemes for renovations with minor energy savings should be abolished. Second, scarce public resources should incentivize deep renovation and demolition and reconstruction. Finally, taxes should apply to the use of virgin land to persuade households with a high willingness to pay for a new house to invest in demolition and reconstruction. - Highlights: • Renovation subsidies worsen overall energy consumption of housing. • Renovation induces a lock-in with energy-inefficient houses. • Renovation subsidies should be abolished or structurally reformed. • Policy should incentivize demolition and reconstruction projects. • Building on virgin land should be taxed.

  14. Energy Reconstruction in a High Granularity Semi-Digital Hadronic Calorimeter for ILC Experiments

    CERN Document Server

    Mannai, S; Cortina, E; Laktineh, I

    2016-01-01

    Abstract: The Semi-Digital Hadronic CALorimeter (SDHCAL) is one of the two hadronic calorimeter options proposed by the International Large Detector (ILD) project for the future International Linear Collider (ILC) experiments. It is a sampling calorimeter with 48 active layers made of Glass Resistive Plate Chambers (GRPCs) and their embedded electronics. A fine lateral segmentation is obtained thanks to pickup pads of 1 cm². This ensures the high granularity required for the application of the Particle Flow Algorithm (PFA) in order to improve the jet energy resolution in the ILC experiments. The performance of the SDHCAL technological prototype was tested successfully in several beam tests at CERN. The main point to be discussed here concerns the energy reconstruction in SDHCAL. Based on Monte Carlo simulation of the SDHCAL prototype using the GEANT4 package, we present different energy reconstruction methods to study the energy linearity and resolution of the detector response to single hadrons. In particula...

  15. Reconstruction of Time-Resolved Neutron Energy Spectra in Z-Pinch Experiments Using Time-of-flight Method

    International Nuclear Information System (INIS)

    Rezac, K.; Klir, D.; Kubes, P.; Kravarik, J.

    2009-01-01

    We present the reconstruction of neutron energy spectra from time-of-flight signals. This technique is useful in experiments where the time of neutron production ranges over tens or hundreds of nanoseconds. The neutron signals were obtained with common fast plastic scintillation detectors sensitive to both hard X-rays and neutrons. The reconstruction is based on a Monte Carlo method, which has been improved by the simultaneous use of neutron detectors placed on two opposite sides of the neutron source. Although reconstruction from detectors on opposite sides is more difficult and somewhat less accurate (a consequence of several assumptions made when combining the two detection sides), it has some advantages, the most important being the smaller influence of scattered neutrons on the reconstruction. Finally, we describe the estimation of the error of this reconstruction.
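For orientation, the basic time-of-flight relation behind such spectra can be sketched in a few lines, assuming classical kinematics (adequate at the few-MeV energies typical of these experiments). The 10 m flight path and 462 ns flight time below are illustrative numbers, not values from the paper:

```python
# Classical time-of-flight estimate of neutron kinetic energy:
# E = (1/2) * m_n * (L / t)^2, adequate for E << m_n c^2 (~939 MeV).
M_N_KG = 1.674927e-27      # neutron rest mass [kg]
J_PER_MEV = 1.602177e-13   # joules per MeV

def neutron_energy_mev(flight_path_m, tof_s):
    """Kinetic energy [MeV] of a neutron covering flight_path_m in tof_s."""
    v = flight_path_m / tof_s          # mean velocity [m/s]
    return 0.5 * M_N_KG * v ** 2 / J_PER_MEV

# Illustrative check: a DD-fusion neutron (~2.45 MeV) over a 10 m path
# arrives after roughly 462 ns.
e_mev = neutron_energy_mev(10.0, 462e-9)
```

Inverting this relation per detected time bin, and smearing it over the production-time history, is what the Monte Carlo reconstruction effectively does.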

  16. Obituary: Anne Barbara Underhill, 1920-2003

    Science.gov (United States)

    Roman, Nancy Grace

    2003-12-01

    Anne was born in Vancouver, British Columbia on 12 June 1920. Her parents were Frederic Clare Underhill, a civil engineer, and Irene Anna (née Creery) Underhill. She had a twin brother and three younger brothers. As a young girl she was active in Girl Guides and graduated from high school winning the Lieutenant Governor's medal as one of the top students in the Province. She also excelled in high school sports. Her mother died when Anne was 18 and, while undertaking her university studies, Anne assisted in raising her younger brothers. Her twin brother was killed in Italy during World War II (1944), a loss that Anne felt deeply. Possibly because of having to fight to get ahead in astronomy, a field overwhelmingly male when she started, she frequently appeared combative. At the University of British Columbia, Anne obtained a BA (honors) in Chemistry (1942), followed by an MA in 1944. After working for the NRC in Montreal for a year, she studied at the University of Toronto prior to entering the University of Chicago in 1946 to obtain her PhD. Her thesis was the first model computed for a multi-layered stellar atmosphere (1948). During this time she worked with Otto Struve, developing a lifetime interest in hot stars and the analysis of their high-dispersion spectra. She received two fellowships from the University Women of Canada. She received a U.S. National Research Fellowship to work at the Copenhagen Observatory, and upon its completion, she returned to British Columbia to work at the Dominion Astrophysical Observatory as a research scientist from 1949–1962. During this period she spent a year at Harvard University as a visiting professor and at Princeton, where she used their advanced computer to write the first code for modeling stellar atmospheres. Anne was invited to the University of Utrecht (Netherlands) as a full professor in 1962. She was an excellent teacher, well liked by the students in her classes, and by the many individuals that she guided throughout her

  17. Neural network model for proton-proton collision at high energy

    International Nuclear Information System (INIS)

    El-Bakry, M.Y.; El-Metwally, K.A.

    2003-01-01

    Developments in artificial intelligence (AI) techniques and their applications to physics have made it feasible to develop and implement new modeling techniques for high-energy interactions. In particular, AI techniques of artificial neural networks (ANN) have recently been used to design and implement more effective models. The primary purpose of this paper is to model the proton-proton (p-p) collision using the ANN technique. Following a review of the conventional techniques and an introduction to the neural network, the paper presents simulation test results using a p-p based ANN model trained with experimental data. The p-p based ANN model calculates the multiplicity distribution of charged particles and the inelastic cross section of the p-p collision at high energies. The results amply demonstrate the feasibility of such a new technique in extracting the collision features and prove its effectiveness.

  18. A track reconstructing low-latency trigger processor for high-energy physics

    International Nuclear Information System (INIS)

    Cuveland, Jan de

    2009-01-01

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 μs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbit/s via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's drift chambers based on explicit value comparisons, calculates the momentum of the originating particles from the course of the reconstructed tracks, and finally leads to a trigger decision. The architecture is capable of processing up to 20 000 track segments in less than 2 μs with high detection efficiency and reconstruction precision for high-momentum particles. As a result, this thesis shows how a trigger processor performing complex online track reconstruction within tight real-time requirements can be realized. The presented hardware has been built and is in continuous data taking operation in the ALICE experiment. (orig.)

  19. A track reconstructing low-latency trigger processor for high-energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Cuveland, Jan de

    2009-09-17

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 μs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbit/s via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's drift chambers based on explicit value comparisons, calculates the momentum of the originating particles from the course of the reconstructed tracks, and finally leads to a trigger decision. The architecture is capable of processing up to 20 000 track segments in less than 2 μs with high detection efficiency and reconstruction precision for high-momentum particles. As a result, this thesis shows how a trigger processor performing complex online track reconstruction within tight real-time requirements can be realized. The presented hardware has been built and is in continuous data taking operation in the ALICE experiment. (orig.)

  20. Information-theoretic discrepancy based iterative reconstructions (IDIR) for polychromatic x-ray tomography

    International Nuclear Information System (INIS)

    Jang, Kwang Eun; Lee, Jongha; Sung, Younghun; Lee, SeongDeok

    2013-01-01

    Purpose: X-ray photons generated from a typical x-ray source for clinical applications exhibit a broad range of wavelengths, and the interactions between individual particles and biological substances depend on particles' energy levels. Most existing reconstruction methods for transmission tomography, however, neglect this polychromatic nature of measurements and rely on the monochromatic approximation. In this study, we developed a new family of iterative methods that incorporates the exact polychromatic model into tomographic image recovery, which improves the accuracy and quality of reconstruction. Methods: The generalized information-theoretic discrepancy (GID) was employed as a new metric for quantifying the distance between the measured and synthetic data. By using special features of the GID, the objective function for polychromatic reconstruction which contains a double integral over the wavelength and the trajectory of incident x-rays was simplified to a paraboloidal form without using the monochromatic approximation. More specifically, the original GID was replaced with a surrogate function with two auxiliary, energy-dependent variables. Subsequently, the alternating minimization technique was applied to solve the double minimization problem. Based on the optimization transfer principle, the objective function was further simplified to the paraboloidal equation, which leads to a closed-form update formula. Numerical experiments on the beam-hardening correction and material-selective reconstruction were conducted to compare and assess the performance of conventional methods and the proposed algorithms. Results: The authors found that the GID determines the distance between its two arguments in a flexible manner. In this study, three groups of GIDs with distinct data representations were considered. The authors demonstrated that one type of GIDs that comprises “raw” data can be viewed as an extension of existing statistical reconstructions; under a

  1. Anneli Randla defended her doctorate at Cambridge / Anneli Randla ; interviewed by Reet Varblane

    Index Scriptorium Estoniae

    Randla, Anneli, 1970-

    1999-01-01

    On 5 May, Anneli Randla defended her doctorate at Cambridge, the first Estonian art historian to do so. Thesis topic: the architecture of mendicant-order friaries in Northern Europe. Supervisor: Dr. Deborah Howard. The interview covers the requirements for the doctorate, the thesis defence, and the opportunities for holders of a master's degree to study at Cambridge.

  2. Accelerated Compressed Sensing Based CT Image Reconstruction.

    Science.gov (United States)

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  3. Accelerated Compressed Sensing Based CT Image Reconstruction

    Directory of Open Access Journals (Sweden)

    SayedMasoud Hashemi

    2015-01-01

    Full Text Available In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction, but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom when reconstructed from 128 rebinned projections using a conventional CS method had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  4. Prediction of Frequency for Simulation of Asphalt Mix Fatigue Tests Using MARS and ANN

    Directory of Open Access Journals (Sweden)

    Ali Reza Ghanizadeh

    2014-01-01

    Full Text Available Fatigue life of asphalt mixes in laboratory tests is commonly determined by applying a sinusoidal or haversine waveform with a specific frequency. The pavement structure and loading conditions affect the shape and the frequency of tensile response pulses at the bottom of the asphalt layer. This paper introduces two methods for predicting the loading frequency in laboratory asphalt fatigue tests for better simulation of field conditions. Five thousand (5000) four-layered pavement sections were analyzed, and the stress and strain response pulses in both longitudinal and transverse directions were determined. After fitting the haversine function to the response pulses by the concept of the equal-energy pulse, the effective length of the response pulses was determined. Two methods, Multivariate Adaptive Regression Splines (MARS) and Artificial Neural Network (ANN), were then employed to predict the effective length (i.e., frequency) of tensile stress and strain pulses in longitudinal and transverse directions based on the haversine waveform. It is indicated that, under controlled stress and strain modes, both methods (MARS and ANN) are capable of predicting the frequency of loading in HMA fatigue tests with very good accuracy. The accuracy of the ANN method is, however, higher than that of the MARS method. It is furthermore shown that the results of the present study can be generalized to a sinusoidal waveform by a simple equation.
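The core step, fitting a haversine to a response pulse and reading the loading frequency off its effective length d as f = 1/d, can be illustrated on a synthetic pulse. All numbers below (pulse amplitude, effective length, grid) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Synthetic strain response pulse with a known effective length d [s].
t = np.linspace(0.0, 0.1, 501)                 # time axis [s]
true_d = 0.04
pulse = 80e-6 * np.sin(np.pi * t / true_d) ** 2 * (t <= true_d)

# Grid search over candidate effective lengths; for each candidate d the
# best haversine amplitude follows in closed form from least squares.
best_d, best_err = None, np.inf
for d in np.linspace(0.01, 0.09, 81):
    basis = np.sin(np.pi * t / d) ** 2 * (t <= d)   # unit haversine of length d
    amp = (basis @ pulse) / (basis @ basis)          # least-squares amplitude
    err = np.sum((pulse - amp * basis) ** 2)
    if err < best_err:
        best_d, best_err = d, err

freq_hz = 1.0 / best_d                          # loading frequency f = 1/d
```

In the paper this effective length is what MARS and ANN are trained to predict directly from pavement-structure and loading parameters, bypassing the pulse fit at test time.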

  5. Land Degradation Monitoring in the Ordos Plateau of China Using an Expert Knowledge and BP-ANN-Based Approach

    Directory of Open Access Journals (Sweden)

    Yaojie Yue

    2016-11-01

    Full Text Available Land degradation monitoring is of vital importance to provide scientific information for promoting sustainable land utilization. This paper presents an expert knowledge and BP-ANN-based approach to detect and monitor land degradation in an effort to overcome the deficiencies of image classification and vegetation index-based approaches. The proposed approach consists of three generic steps: (1) extraction of knowledge on the relationship between land degradation degree and predisposing factors, which are NDVI and albedo, from domain experts; (2) establishment of a land degradation detecting model based on the BP-ANN algorithm; and (3) land degradation dynamic analysis. A comprehensive analysis was conducted on the development of land degradation in the Ordos Plateau of China in 1990, 2000 and 2010. The results indicate that the proposed approach is reliable for monitoring land degradation, with an overall accuracy of 91.2%. From 1990 to 2010, a reversing trend of land degradation is observed in the Ordos Plateau. Regions with relatively high land degradation dynamics were mostly located in the northeast of the Ordos Plateau. Additionally, most of these regions have shifted from hot spots of land degradation to areas of little change. It is suggested that land utilization optimization plays a key role in effective land degradation control. However, it should be highlighted that the goals of such strategies should target the main negative factors causing land degradation, and the land use type and its quantity must meet the demand of the population and be reconciled with natural conditions. Results from this case study suggest that the expert knowledge and BP-ANN-based approach is effective in mapping land degradation.

  6. Assessment of ANN and SVM models for estimating normal direct irradiation (H_b)

    International Nuclear Information System (INIS)

    Santos, Cícero Manoel dos; Escobedo, João Francisco; Teramoto, Érico Tadao; Modenese Gorla da Silva, Silvia Helena

    2016-01-01

    Highlights: • The performance of SVM and ANN in estimating Normal Direct Irradiation (H_b) was evaluated. • 12 models using different input variables are developed (hourly and daily partitions). • The most relevant input variables for DNI are kt, H_sc and insolation ratio (r′ = n/N). • Support Vector Machine (SVM) provides accurate estimates and outperforms the Artificial Neural Network (ANN). - Abstract: This study evaluates the estimation of hourly and daily normal direct irradiation (H_b) using machine learning techniques (ML): Artificial Neural Network (ANN) and Support Vector Machine (SVM). Time series of different meteorological variables measured over thirteen years in Botucatu were used for training and validating ANN and SVM. Seven different sets of input variables were tested and evaluated, which were chosen based on statistical models reported in the literature. Relative Mean Bias Error (rMBE), Relative Root Mean Square Error (rRMSE), determination coefficient (R²) and “d” Willmott index were used to evaluate ANN and SVM models. When compared to statistical models which use the same set of input variables (R² between 0.22 and 0.78), ANN and SVM show higher values of R² (hourly models between 0.52 and 0.88; daily models between 0.42 and 0.91). Considering the input variables, atmospheric transmissivity of global radiation (kt), integrated solar constant (H_sc) and insolation ratio (n/N, n is sunshine duration and N is photoperiod) were the most relevant in ANN and SVM models. The rMBE and rRMSE values in the two time partitions of SVM models are lower than those obtained with ANN. Hourly ANN and SVM models have higher rRMSE values than daily models. Optimal performance with hourly models was obtained with ANN4h (rMBE = 12.24%, rRMSE = 23.99% and “d” = 0.96) and SVM4h (rMBE = 1.75%, rRMSE = 20.10% and “d” = 0.96). Optimal performance with daily models was obtained with ANN2d (rMBE = −3.09%, rRMSE = 18.95% and “d” = 0
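The four goodness-of-fit measures named in the abstract have standard definitions, sketched below; the paper's exact conventions may differ in detail, and the sample arrays are invented:

```python
import numpy as np

def rmbe(obs, est):
    """Relative mean bias error [%]: positive = systematic overestimation."""
    return 100.0 * np.mean(est - obs) / np.mean(obs)

def rrmse(obs, est):
    """Relative root mean square error [%]."""
    return 100.0 * np.sqrt(np.mean((est - obs) ** 2)) / np.mean(obs)

def willmott_d(obs, est):
    """Willmott index of agreement; 1 means perfect agreement."""
    om = np.mean(obs)
    denom = np.sum((np.abs(est - om) + np.abs(obs - om)) ** 2)
    return 1.0 - np.sum((est - obs) ** 2) / denom

# Invented sample: 4 observed vs. estimated irradiation values
obs = np.array([2.0, 3.0, 4.0, 5.0])
est = np.array([2.1, 2.9, 4.2, 4.8])
```

R² is then the ordinary coefficient of determination between `obs` and `est`.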

  7. Tau reconstruction, energy calibration and identification at ATLAS

    Indian Academy of Sciences (India)

    ... hadronically decaying tau leptons, as well as large suppression of fake candidates. A solid understanding of the combined performance of the calorimeter and tracking detectors is also required. We present the current status of the tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC.

  8. Multifractal signal reconstruction based on singularity power spectrum

    International Nuclear Information System (INIS)

    Xiong, Gang; Yu, Wenxian; Xia, Wenxiang; Zhang, Shuning

    2016-01-01

    Highlights: • We propose a novel multifractal reconstruction method based on singularity power spectrum analysis (MFR-SPS). • The proposed MFR-SPS method has better power characteristics than the algorithm in Fraclab. • Further, the SPS-ISE algorithm performs better than the SPS-MFS algorithm. • Based on the proposed MFR-SPS method, we can reconstruct singularity white fractal noise (SWFN) and linear singularity modulation (LSM) multifractal signals, which are, in an equivalent sense, analogous to the linear frequency modulation (LFM) signal and WGN in the Fourier domain. - Abstract: Fractal reconstruction (FR) and multifractal reconstruction (MFR) can be considered as the inverse problem of singularity spectrum analysis, and it is challenging to reconstruct a fractal signal in accordance with a multifractal spectrum (MFS). Due to the multiple solutions of fractal reconstruction, the traditional methods of FR/MFR, such as the FBM-based method, the wavelet-based method and random wavelet series, fail to reconstruct fractal signals deterministically; besides, those methods neglect the power spectral distribution in the singular domain. In this paper, we propose a novel MFR method based on the singularity power spectrum (SPS). Supposing a consistent uniform covering of the multifractal measurement, we control the traditional power law of each scale of wavelet coefficients based on the instantaneous singularity exponents (ISE) or MFS, simultaneously control the singularity power law based on the SPS, and deduce the principle and algorithm of MFR based on SPS. Reconstruction simulation and error analysis of estimated ISE, MFS and SPS show the effectiveness and the improvement of the proposed methods compared to those obtained by the Fraclab package.

  9. Ann comes from Rakvere to Võru

    Index Scriptorium Estoniae

    2009-01-01

    At the Kannel culture centre in Võru, Rakvere Theatre's youth production "Kuidas elad? ...Ann?!", based on a story by Aidi Vallik, will be performed on 17 April. Directed by Sven Heiberg. The cast also includes theatre students of the Viljandi Culture Academy.

  10. Computer Based Road Accident Reconstruction Experiences

    Directory of Open Access Journals (Sweden)

    Milan Batista

    2005-03-01

    Full Text Available Since road accident analyses and reconstructions are increasingly based on specific computer software for simulation of vehicle driving dynamics and collision dynamics, and for simulation of a set of trial runs from which the model that best describes a real event can be selected, the paper presents an overview of some computer software and methods available to accident reconstruction experts. Besides being time-saving, when properly used such computer software can provide more authentic and more trustworthy accident reconstruction; therefore practical experiences while using computer software tools for road accident reconstruction obtained in the Transport Safety Laboratory at the Faculty for Maritime Studies and Transport of the University of Ljubljana are presented and discussed. This paper also addresses software technology for extracting maximum information from the accident photo-documentation to support accident reconstruction based on the simulation software, as well as the field work of reconstruction experts or police on the road accident scene defined by this technology.

  11. Reconstruction of railway energy lines; Rekonstruktion von Bahnenergieleitungen

    Energy Technology Data Exchange (ETDEWEB)

    Rothe, Matthias [DB Energie Gmbh, Berlin (Germany); Wahlen, Manfred [DB Energie Gmbh, Koeln (Germany)

    2013-06-15

    As is the case for other overhead lines, 110 kV railway energy lines are assumed to have a service life of 80 years. When this service life ends or the lines are adapted to the state of the art as necessary, all components of the overhead line are renewed. For the reconstruction of railway energy lines, the specifications laid down in planning and environmental laws, operational management aspects and the local ambient and development situation are decisive project planning parameters. (orig.)

  12. Modeling of an Aged Porous Silicon Humidity Sensor Using ANN Technique

    Directory of Open Access Journals (Sweden)

    Tarikul ISLAM

    2006-10-01

    Full Text Available A porous silicon (PS) sensor based on the capacitive technique for measuring relative humidity has the advantages of low cost, ease of fabrication with controlled structure, and CMOS compatibility. But the response of the sensor is a nonlinear function of humidity and suffers from errors due to aging and stability. One adaptive linear (ADALINE) ANN model has been developed to model the behavior of the sensor with a view to estimating and compensating these errors. The response of the sensor is represented by a third-order polynomial basis function whose coefficients are determined by the ANN technique. The drift in sensor output due to aging of the PS layer is also modeled by adapting the weights of the polynomial function. ANN-based modeling is found to be more suitable than conventional physical modeling of the PS humidity sensor in a changing environment and under drift due to aging. It enables online estimation of nonlinearity as well as monitoring of faults of the PS humidity sensor using the coefficients of the model.
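The ADALINE idea, adapting the coefficients of a third-order polynomial with the Widrow-Hoff delta rule, can be sketched on synthetic data; the "true" sensor curve, noise level and learning settings below are assumptions for illustration, not the paper's values:

```python
import numpy as np

rng = np.random.default_rng(1)
rh = np.linspace(0.1, 0.9, 40)                      # relative humidity, scaled 0..1
cap = 1.0 + 0.5 * rh + 1.2 * rh**2 + 0.8 * rh**3    # assumed "true" capacitance curve
cap += rng.normal(0.0, 0.005, rh.size)              # measurement noise

# ADALINE: a linear unit over the third-order polynomial basis [1, x, x^2, x^3];
# its weights are exactly the polynomial coefficients of the sensor model.
X = np.stack([np.ones_like(rh), rh, rh**2, rh**3], axis=1)
w = np.zeros(4)
eta = 0.05                                          # learning rate
for _ in range(2000):                               # Widrow-Hoff (LMS) epochs
    for xi, yi in zip(X, cap):
        w += eta * (yi - xi @ w) * xi               # delta rule update

rms = float(np.sqrt(np.mean((cap - X @ w) ** 2)))   # residual fit error
```

Aging drift would be tracked by simply letting the same update rule keep running on fresh measurements, so `w` follows the slowly changing sensor characteristic.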

  13. Theory Study and Application of the BP-ANN Method for Power Grid Short-Term Load Forecasting

    Institute of Scientific and Technical Information of China (English)

    Xia Hua; Gang Zhang; Jiawei Yang; Zhengyuan Li

    2015-01-01

    Aiming at the low accuracy of traditional short-term load forecasting methods for power systems, a back-propagation artificial neural network (BP-ANN) based method for short-term load forecasting is presented in this paper. The forecast points are related to adjacent data from the preceding period as well as to the periodic long-term historical load data. A short-term load forecasting model of the Shanxi Power Grid (China) based on the BP-ANN method and correlation analysis is then established. The simulation model matches well with the practical power system load, indicating that the BP-ANN method is simple and offers higher precision and practicality.
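As an illustration of the general approach, not of the Shanxi model itself, a minimal BP-ANN that forecasts the next hourly load from three lagged values can be written as follows; the network size, lags and synthetic load series are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
hours = np.arange(0, 24 * 30)                        # 30 days of hourly "load"
load = 0.6 + 0.3 * np.sin(2 * np.pi * hours / 24.0)  # daily cycle, scaled 0..1
load += rng.normal(0.0, 0.01, load.size)             # noise

LAGS = 3                                             # predict from 3 previous hours
X = np.stack([load[i:i - LAGS] for i in range(LAGS)], axis=1)
y = load[LAGS:].reshape(-1, 1)

W1 = rng.normal(0, 0.3, (LAGS, 8)); b1 = np.zeros(8)  # one hidden layer, 8 units
W2 = rng.normal(0, 0.3, (8, 1));    b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):                   # batch back-propagation, MSE loss
    h = np.tanh(X @ W1 + b1)            # hidden activations
    out = h @ W2 + b2                   # linear output unit
    g = (out - y) / len(X)              # dL/d(out) for L = 0.5 * mean sq. error
    gW2 = h.T @ g; gb2 = g.sum(0)
    gh = g @ W2.T * (1 - h ** 2)        # back-propagate through tanh
    gW1 = X.T @ gh; gb1 = gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse = float(np.sqrt(np.mean((out - y) ** 2)))
```

A real model would add calendar features and the correlated historical loads selected by the paper's correlation analysis; the training loop is unchanged.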

  14. Anne Veski: "So my international fame must not have reached our president's ears" / Anne Veski ; interviewed by Tiia Linnard

    Index Scriptorium Estoniae

    Veski, Anne, 1956-

    2008-01-01

    Singer Anne Veski reflects on her concert activity in Russia and her life in Estonia. Among other things, she mentions that she has never been invited to the president's Independence Day reception. Also published in: Severnoje Poberezhje, 20 March 2008, p. 6.

  15. Reconstructing missing daily precipitation data using regression trees and artificial neural networks

    Science.gov (United States)

    Incomplete meteorological data has been a problem in environmental modeling studies. The objective of this work was to develop a technique to reconstruct missing daily precipitation data in the central part of Chesapeake Bay Watershed using regression trees (RT) and artificial neural networks (ANN)....
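A minimal sketch of the regression-tree half of this idea: a depth-1 tree (stump) infilling a target station's daily precipitation from one neighbour station, on synthetic data. Real applications would use several neighbour stations and deeper trees:

```python
import numpy as np

rng = np.random.default_rng(3)
neighbour = rng.gamma(0.5, 8.0, 400)               # neighbour station rainfall [mm]
target = np.clip(0.8 * neighbour + rng.normal(0, 1.0, 400), 0.0, None)

def fit_stump(x, y):
    """Depth-1 regression tree: best single split minimising summed sq. error."""
    best = (np.inf, None, y.mean(), y.mean())
    for s in np.unique(x):
        left, right = y[x <= s], y[x > s]
        if len(left) == 0 or len(right) == 0:
            continue                                # split must leave both sides non-empty
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best[0]:
            best = (sse, s, left.mean(), right.mean())
    return best[1:]                                 # (split, left_pred, right_pred)

split, lo, hi = fit_stump(neighbour, target)
# Reconstruct "missing" target-station days from the neighbour's record:
filled = np.where(neighbour <= split, lo, hi)
```

Growing the tree recursively on each side of the split, or replacing the stump by an ANN on the same neighbour inputs, gives the two techniques compared in the abstract.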

  16. Mary Anne Chambers | IDRC - International Development Research ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    A former Member of Provincial Parliament, Mary Anne served as Minister of Training, Colleges and Universities, and Minister of Children and Youth Services in the Government of Ontario. She is also a former senior vice-president of Scotiabank. A graduate of the University of Toronto, Mary Anne has received honorary ...

  17. Porous media microstructure reconstruction using pixel-based and object-based simulated annealing: comparison with other reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)

    2008-07-01

    The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments. These experiments are often very expensive and time-consuming. Hence, digital image analysis techniques are a very fast and low-cost methodology for predicting physical properties, knowing only geometrical parameters measured from thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the relaxation method simulated annealing. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy, and the 3D model maintains porosity spatial correlation, chord size distribution and d3-4 distance transform distribution for a pixel-based reconstruction, and spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is in its beginning, only the 2D results will be presented. (author)

  18. Incomplete projection reconstruction of computed tomography based on the modified discrete algebraic reconstruction technique

    Science.gov (United States)

    Yang, Fuqiang; Zhang, Dinghua; Huang, Kuidong; Gao, Zongzhao; Yang, YaFei

    2018-02-01

    Based on the discrete algebraic reconstruction technique (DART), this study develops and tests an improved algorithm for incomplete projection data that generates a high-quality reconstruction by reducing artifacts and noise in computed tomography. For the incomplete projections, an augmented-Lagrangian method based on compressed sensing is first used in the initial reconstruction for the DART segmentation, yielding higher contrast between boundary and non-boundary pixels. Then, the block-matching 3D filtering operator is used to suppress noise and improve the gray-level distribution of the reconstructed image. Finally, simulation studies on a polychromatic spectrum were performed to test the performance of the new algorithm. Study results show a significant improvement in the signal-to-noise ratios (SNRs) and average gradients (AGs) of images reconstructed from incomplete data. The SNRs and AGs of images reconstructed by DART-ALBM were on average 30%-40% and 10% higher, respectively, than those of images reconstructed by the DART algorithm. Since the improved DART-ALBM algorithm is more robust to limited-view reconstruction, making image edges clear while improving the gray-level distribution of non-boundary pixels, it has the potential to improve image quality from incomplete or sparse projections.
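
The DART segmentation step that this record builds on can be illustrated compactly: snap a continuous reconstruction to the nearest admissible gray level, then mark the boundary pixels, which are the only ones a DART iteration re-estimates in the next algebraic pass. A minimal sketch under our own naming; the wrap-around edge handling via `np.roll` is a simplification.

```python
import numpy as np

def dart_segment(recon, levels):
    """DART-style step: quantize a continuous reconstruction to the
    nearest discrete gray level, then flag boundary pixels (those
    with a differently labelled 4-neighbour; edges wrap around)."""
    recon = np.asarray(recon, float)
    levels = np.asarray(levels, float)
    idx = np.abs(recon[..., None] - levels).argmin(axis=-1)
    seg = levels[idx]
    boundary = np.zeros(seg.shape, dtype=bool)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        boundary |= seg != np.roll(seg, shift, axis=axis)
    return seg, boundary
```

In full DART, the non-boundary pixels are then fixed at their segmented values and only the boundary set is updated by the algebraic reconstruction step.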

  19. Tensor-based dictionary learning for dynamic tomographic reconstruction

    International Nuclear Information System (INIS)

    Tan, Shengqi; Wu, Zhifang; Zhang, Yanbo; Mou, Xuanqin; Wang, Ge; Cao, Guohua; Yu, Hengyong

    2015-01-01

    In dynamic computed tomography (CT) reconstruction, the data acquisition speed limits the spatio-temporal resolution. Recently, compressed sensing theory has been instrumental in improving CT reconstruction from few-view projections. In this paper, we present an adaptive method to train a tensor-based spatio-temporal dictionary for sparse representation of an image sequence during the reconstruction process. The correlations among atoms and across phases are considered to capture the characteristics of an object. The reconstruction problem is solved by the alternating direction method of multipliers. To recover fine or sharp structures such as edges, the nonlocal total variation is incorporated into the algorithmic framework. Preclinical examples, including a sheep lung perfusion study and a dynamic mouse cardiac imaging study, demonstrate that the proposed approach outperforms the vectorized dictionary-based CT reconstruction in the case of few-view reconstruction. (paper)

  1. Short-term load forecast using trend information and process reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Santos, P.J.; Pires, A.J.; Martins, J.F. [Instituto Politecnico de Setubal (Portugal). Dept. of Electrical Engineering; Martins, A.G. [University of Coimbra (Portugal). Dept. of Electrical Engineering; Mendes, R.V. [Instituto Superior Tecnico, Lisboa (Portugal). Laboratorio de Mecatronica

    2005-07-01

    The algorithms for short-term load forecast (STLF), especially within the next-hour horizon, belong to a group of methodologies that aim to make the planning, operation and control of electric energy systems (EES) more effective. In the context of the progressive liberalization of the electricity sector, unbundling of the previous monopolistic structure emphasizes the need for load forecast, particularly at the network level. Methodologies such as artificial neural networks (ANN) have been widely used in next-hour load forecast. Designing an ANN requires the proper choice of input variables, avoiding overfitting and an unnecessarily complex input vector (IV); this may be achieved by reducing the arbitrariness in the choice of endogenous variables. At a first stage, we applied the mathematical techniques of process reconstruction to the underlying stochastic process, using coding and block entropies to characterize the measure and the memory range. At a second stage, the concept of consumption trend on homologous days of previous weeks was used. The possibility of including weather-related variables in the IV was also analysed, the final option being to establish a model of the non-weather-sensitive type. The paper uses a real-life case study. (author)

  2. ANN Model-Based Simulation of the Runoff Variation in Response to Climate Change on the Qinghai-Tibet Plateau, China

    Directory of Open Access Journals (Sweden)

    Chang Juan

    2017-01-01

    Precisely quantitative assessment of streamflow response to climate change and permafrost thawing is highly challenging and urgent in cold regions. However, owing to the notably harsh environmental conditions, there is little field monitoring data for runoff in permafrost regions, which has limited the development of physically based models there. To identify the impacts of climate change on the runoff process in the Three-River Headwater Region (TRHR) on the Qinghai-Tibet Plateau, two artificial neural network (ANN) models, one with three input variables (previous runoff, air temperature, and precipitation) and another with two input variables (air temperature and precipitation only), were developed to simulate and predict the runoff variation in the TRHR. The results show that the three-input ANN model has superior real-time prediction capability and performs well in simulating and forecasting the runoff variation in the TRHR. Under the different scenario conditions, the forecasting results of the ANN model indicate that climate change has a great effect on the runoff processes in the TRHR. The results of this study are of practical significance for water resources management and for evaluating the long-term impacts of climatic change on the hydrological regime.

  3. Application of ANN-SCE model on the evaluation of automatic generation control performance

    Energy Technology Data Exchange (ETDEWEB)

    Chang-Chien, L.R.; Lo, C.S.; Lee, K.S. [National Cheng Kung Univ., Tainan, Taiwan (China)

    2005-07-01

    An accurate evaluation of load frequency control (LFC) performance is needed to balance minute-to-minute electricity generation and demand. In this study, an artificial neural network-based system control error (ANN-SCE) model was used to assess the performance of automatic generation controls (AGC). The model was used to identify system dynamics for control references in supplementing AGC logic. The artificial neural network control error model was used to track a single area's LFC dynamics in Taiwan. The model was used to gauge the impacts of regulation control. Results of the training, evaluating, and projecting processes showed that the ANN-SCE model could be algebraically decomposed into components corresponding to different impact factors. The SCE information obtained from testing of various AGC gains provided data for the creation of a new control approach. The ANN-SCE model was used in conjunction with load forecasting and scheduled generation data to create an ANN-SCE identifier. The model successfully simulated SCE dynamics. 13 refs., 10 figs.

  4. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are more suitable for the reconstruction of images with high contrast and precision under noisy conditions from a small number of projections. In practice, however, these methods are not widely used due to the high computational cost of their implementation. Current technology makes it possible to reduce this drawback effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high-quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms can reconstruct images from an undersampled set of projections. • The computational cost of implementing the developed algorithm is low. • The efficiency of the algorithm increases for large-scale problems
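
Iterative CT methods of the kind this work accelerates are built from updates in which every ray and every pixel can be processed independently, which is why they map well onto GPUs. Below is a CPU-side sketch of one such scheme, SIRT, on a toy dense system matrix; the function name and the dense `A` are ours for illustration (real CT projectors are sparse or matrix-free).

```python
import numpy as np

def sirt(A, b, iters=200):
    """Minimal SIRT iteration  x += C * A^T (R * (b - A x)),
    where R and C are inverse row and column sums of A. Every
    component of the two matrix-vector products is independent,
    so on a GPU each becomes a single parallel kernel."""
    A = np.asarray(A, float)
    b = np.asarray(b, float)
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x += C * (A.T @ (R * (b - A @ x)))
    return x
```

On a GPU the two matrix-vector products become one kernel launch each; the iteration structure itself is unchanged.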

  5. Ann Modeling for Grey Particles Produced from Interactions of Different Projectiles with Emulsion Nuclei at 4.5 AGEV/C

    International Nuclear Information System (INIS)

    El-Bakry, M.N.Y.; Basha, A.M.; Rashed, N.; Mahmoud, M.A.; Radi, A.

    2008-01-01

    The Artificial Neural Network (ANN) is one of the important tools in high-energy physics. In this paper, we use an ANN to model the multiplicity distributions of grey particles produced from interactions of p, 3He, 4He, 6Li, 12C, 24Mg, and 32S with emulsion nuclei, light nuclei (CNO), and heavy nuclei (AgBr). The equations of these distributions were obtained.

  6. An ANN application for water quality forecasting.

    Science.gov (United States)

    Palani, Sundarambal; Liong, Shie-Yui; Tkalich, Pavel

    2008-09-01

    Rapid urban and coastal developments often witness deterioration of regional seawater quality. As part of the management process, it is important to assess the baseline characteristics of the marine environment so that sustainable development can be pursued. In this study, artificial neural networks (ANNs) were used to predict and forecast quantitative characteristics of water bodies. The true power and advantage of this method lie in its ability to (1) represent both linear and non-linear relationships and (2) learn these relationships directly from the data being modeled. The study focuses on Singapore coastal waters. The ANN model is built for quick assessment and forecasting of selected water quality variables at any location in the domain of interest. Respective variables measured at other locations serve as the input parameters. The variables of interest are salinity, temperature, dissolved oxygen, and chlorophyll-alpha. A time lag up to 2Δt appeared to suffice to yield good simulation results. To validate the performance of the trained ANN, it was applied to an unseen data set from a station in the region. The results show the ANN's great potential to simulate water quality variables. Simulation accuracy, measured in the Nash-Sutcliffe coefficient of efficiency (R²), ranged from 0.8 to 0.9 for the training and overfitting test data. Thus, a trained ANN model may potentially provide simulated values for desired locations at which measured data are unavailable yet required for water quality models.
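
The Nash-Sutcliffe coefficient of efficiency used above to score the ANN is straightforward to compute; a short reference implementation (the helper name is ours):

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance about the
    observed mean. 1.0 is a perfect fit, 0.0 means no better than
    predicting the observed mean, and negative values are worse."""
    o = np.asarray(observed, float)
    s = np.asarray(simulated, float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - o.mean()) ** 2)
```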

  7. Development of a thermal control algorithm using artificial neural network models for improved thermal comfort and energy efficiency in accommodation buildings

    International Nuclear Information System (INIS)

    Moon, Jin Woo; Jung, Sung Kwon

    2016-01-01

    Highlights: • An ANN model for predicting the optimal start moment of the cooling system was developed. • An ANN model for predicting the amount of cooling energy consumption was developed. • An optimal control algorithm was developed employing the two ANN models. • The algorithm showed improved thermal comfort and energy efficiency. - Abstract: The aim of this study was to develop a control algorithm to demonstrate the improved thermal comfort and building energy efficiency of accommodation buildings in the cooling season. For this, two artificial neural network (ANN)-based predictive and adaptive models were developed and employed in the algorithm. One model predicted the cooling energy consumption during the unoccupied period for different setback temperatures, and the other predicted the time required to restore the current indoor temperature to the normal set-point temperature. Using numerical simulation methods, the prediction accuracy of the two ANN models and the performance of the algorithm were tested. The test result analysis showed that the two ANN models achieved acceptable prediction accuracy when applied in the control algorithm. In addition, the algorithm based on the two ANN models provided a more comfortable and energy-efficient indoor thermal environment than the two conventional control methods, which employed, respectively, a fixed set-point temperature for the entire day and a setback temperature during the unoccupied period. The operating range was 23–26 °C during the occupied period and 25–28 °C during the unoccupied period. Based on the analysis, it can be concluded that the optimal algorithm with the two predictive and adaptive ANN models can be used to design a more comfortable and energy-efficient indoor thermal environment for accommodation buildings in a comprehensive manner.

  8. Support vector machine regression (LS-SVM)--an alternative to artificial neural networks (ANNs) for the analysis of quantum chemistry data?

    Science.gov (United States)

    Balabin, Roman M; Lomakina, Ekaterina I

    2011-06-28

    A multilayer feed-forward artificial neural network (MLP-ANN) with a single hidden layer that contains a finite number of neurons can be regarded as a universal non-linear approximator. Today, the ANN method and the linear regression (MLR) model are widely used for quantum chemistry (QC) data analysis (e.g., thermochemistry) to improve its accuracy (e.g., Gaussian G2-G4, B3LYP/B3-LYP, X1, or W1 theoretical methods). In this study, an alternative approach based on support vector machines (SVMs) is used: the least squares support vector machine (LS-SVM) regression. It has been applied to ab initio (first principle) and density functional theory (DFT) quantum chemistry data. Thus, the QC + SVM methodology is an alternative to the QC + ANN one. The task of the study was to estimate the Møller-Plesset (MPn) or DFT (B3LYP, BLYP, BMK) energies calculated with large basis sets (e.g., 6-311G(3df,3pd)) using smaller ones (6-311G, 6-311G*, 6-311G**) plus molecular descriptors. A molecular set (BRM-208) containing a total of 208 organic molecules was constructed and used for the LS-SVM training, cross-validation, and testing. MP2, MP3, MP4(DQ), MP4(SDQ), and MP4/MP4(SDTQ) ab initio methods were tested. Hartree-Fock (HF/SCF) results were also reported for comparison. Furthermore, constitutional (CD: total number of atoms and mole fractions of different atoms) and quantum-chemical (QD: HOMO-LUMO gap, dipole moment, average polarizability, and quadrupole moment) molecular descriptors were used for building the LS-SVM calibration model. Prediction accuracies (MADs) of 1.62 ± 0.51 and 0.85 ± 0.24 kcal mol⁻¹ (1 kcal mol⁻¹ = 4.184 kJ mol⁻¹) were reached for SVM-based approximations of ab initio and DFT energies, respectively. The LS-SVM model was more accurate than the MLR model. A comparison with the artificial neural network approach shows that the accuracy of the LS-SVM method is similar to the accuracy of ANN. The extrapolation and interpolation results show that LS-SVM is
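
LS-SVM regression as used in this study replaces the SVM quadratic program with a single linear system. Below is a minimal sketch with an RBF kernel; the function names and hyperparameter values are our own illustrative choices, not the paper's settings.

```python
import numpy as np

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """Least-squares SVM regression: solve the KKT linear system
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    with RBF kernel K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    Returns a predictor closure for new query points."""
    X = np.asarray(X, float)
    y = np.asarray(y, float)
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    def predict(Xq):
        Xq = np.asarray(Xq, float)
        dq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-dq / (2.0 * sigma ** 2)) @ alpha + b

    return predict
```

Unlike the standard SVM, every training point ends up a support vector (`alpha` is dense), which is the price paid for reducing training to one solve.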

  9. Systematic Review: Aesthetic Assessment of Breast Reconstruction Outcomes by Healthcare Professionals.

    Science.gov (United States)

    Maass, Saskia W M C; Bagher, Shaghayegh; Hofer, Stefan O P; Baxter, Nancy N; Zhong, Toni

    2015-12-01

    Achieving an aesthetic outcome following postmastectomy breast reconstruction is an important goal for both the patient and the plastic surgeon. However, there is currently no widely accepted, standardized, and validated professional aesthetic assessment scale for postmastectomy breast reconstruction. A systematic review was performed to identify all articles that provided professional assessment of the aesthetic outcome following postmastectomy, implant- or autologous tissue-based breast reconstruction. A modified version of the Scientific Advisory Committee's Medical Outcomes Trust (MOT) criteria was used to evaluate all professional aesthetic assessment scales identified by our systematic review. The criteria included conceptual framework formation, reliability, validity, responsiveness, interpretability, burden, and correlation with patient-reported outcomes. A total of 120 articles were identified: 52 described autologous breast reconstruction, 37 implant-based reconstruction, and 29 both. Of the 12 different professional aesthetic assessment scales that exist in the literature, the most commonly used was the four-point professional aesthetic assessment scale. The highest score on the modified MOT criteria was assigned to the ten-point professional aesthetic assessment scale. However, this scale has limited clinical usefulness due to its poor responsiveness to change, lack of interpretability, and wide range of intra- and inter-rater agreements (Veiga et al. in Ann Plast Surg 48(5):515-520, 2002). A "gold standard" professional aesthetic assessment scale needs to be developed to enhance the comparability of breast reconstruction results across techniques, surgeons, and studies, and to aid in selecting the procedures that produce the best aesthetic results from the perspectives of both the surgeon and the patient.

  10. Prediction of Film Cooling Effectiveness on a Gas Turbine Blade Leading Edge Using ANN and CFD

    Science.gov (United States)

    Dávalos, J. O.; García, J. C.; Urquiza, G.; Huicochea, A.; De Santiago, O.

    2018-05-01

    In this work, the area-averaged film cooling effectiveness (AAFCE) on a gas turbine blade leading edge was predicted by employing an artificial neural network (ANN) using as input variables the hole diameter, injection angle, blowing ratio, and hole and column pitch. The database used to train the network was built using computational fluid dynamics (CFD) based on a two-level full factorial design of experiments. The CFD numerical model was validated with an experimental rig, in which a first-stage blade of a gas turbine was represented by a cylindrical specimen. The ANN architecture was composed of three layers with four neurons in the hidden layer, and Levenberg-Marquardt was selected as the ANN optimization algorithm. The AAFCE was successfully predicted by the ANN with a regression coefficient R² > 0.99 and a root mean square error RMSE = 0.0038. The ANN weight coefficients were used to estimate the relative importance of the input parameters. Blowing ratio was the most influential parameter, with a relative importance of 40.36%, followed by hole diameter. Additionally, the ANN model was used to analyze the relationships between input parameters.
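
Estimating input importance from ANN weight coefficients, as done above for the blowing ratio, is commonly carried out with Garson's weight-partitioning scheme; the record does not say which variant the authors used, so treat the sketch below as one plausible reading with our own naming.

```python
import numpy as np

def garson_importance(w_ih, w_ho):
    """Garson's weight-partitioning estimate of input importance for
    a single-hidden-layer MLP. w_ih: (n_inputs, n_hidden) input-to-
    hidden weights; w_ho: (n_hidden,) hidden-to-output weights.
    Returns percentages summing to 100."""
    c = np.abs(w_ih) * np.abs(w_ho)[None, :]     # |w_ih| * |w_ho| per path
    r = c / c.sum(axis=0, keepdims=True)         # share per hidden neuron
    imp = r.sum(axis=1)                          # accumulate over neurons
    return 100.0 * imp / imp.sum()
```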

  11. NEW FERMI-LAT EVENT RECONSTRUCTION REVEALS MORE HIGH-ENERGY GAMMA RAYS FROM GAMMA-RAY BURSTS

    Energy Technology Data Exchange (ETDEWEB)

    Atwood, W. B. [Santa Cruz Institute for Particle Physics, Department of Physics and Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Baldini, L. [Universita di Pisa and Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Bregeon, J.; Pesce-Rollins, M.; Sgro, C.; Tinivella, M. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Bruel, P. [Laboratoire Leprince-Ringuet, Ecole polytechnique, CNRS/IN2P3, Palaiseau (France); Chekhtman, A. [Center for Earth Observing and Space Research, College of Science, George Mason University, Fairfax, VA 22030 (United States); Cohen-Tanugi, J. [Laboratoire Univers et Particules de Montpellier, Universite Montpellier 2, CNRS/IN2P3, F-34095 Montpellier (France); Drlica-Wagner, A.; Omodei, N.; Rochester, L. S.; Usher, T. L. [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Granot, J. [Department of Natural Sciences, The Open University of Israel, 1 University Road, P.O. Box 808, Ra'anana 43537 (Israel); Longo, F. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Razzaque, S. [Department of Physics, University of Johannesburg, Auckland Park 2006 (South Africa); Zimmer, S., E-mail: melissa.pesce.rollins@pi.infn.it, E-mail: nicola.omodei@stanford.edu, E-mail: granot@openu.ac.il [Department of Physics, Stockholm University, AlbaNova, SE-106 91 Stockholm (Sweden)

    2013-09-01

    Based on the experience gained during the four and a half years of the mission, the Fermi-LAT Collaboration has undertaken a comprehensive revision of the event-level analysis going under the name of Pass 8. Although it is not yet finalized, we can test the improvements in the new event reconstruction with the special case of the prompt phase of bright gamma-ray bursts (GRBs), where the signal-to-noise ratio is large enough that loose selection cuts are sufficient to identify gamma rays associated with the source. Using the new event reconstruction, we have re-analyzed 10 GRBs previously detected by the Large Area Telescope (LAT) for which an X-ray/optical follow-up was possible and found four new gamma rays with energies greater than 10 GeV in addition to the seven previously known. Among these four is a 27.4 GeV gamma ray from GRB 080916C, which has a redshift of 4.35, thus making it the gamma ray with the highest intrinsic energy (~147 GeV) detected from a GRB. We present here the salient aspects of the new event reconstruction and discuss the scientific implications of these new high-energy gamma rays, such as constraining extragalactic background light models, Lorentz invariance violation tests, the prompt emission mechanism, and the bulk Lorentz factor of the emitting region.

  12. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software) developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given.

  13. Implementation of ANN on CCHP system to predict trigeneration performance with consideration of various operative factors

    International Nuclear Information System (INIS)

    Anvari, Simin; Taghavifar, Hadi; Saray, Rahim Khoshbakhti; Khalilarya, Shahram; Jafarmadar, Samad

    2015-01-01

    Highlights: • An ANN modeling tool was implemented on the CCHP system. • The best ANN topology was found to be 10–8–9 with the Levenberg–Marquardt algorithm. • The system is more sensitive to CC outlet temperature and turbine isentropic efficiency. • The lowest RMSE = 3.13e−5 and the best R² = 0.999 relate to the lambda and second-law efficiency terms, respectively. - Abstract: A detailed investigation was carried out based on a numerical thermodynamic survey and artificial neural network (ANN) modeling of the trigeneration system. The results are presented in two pivotal frameworks, namely the sensitivity analysis and the ANN prediction capability of the proposed modeling. The underlying operative parameters were chosen as input parameters from different cycles and components, while the exergy efficiency, exergy loss, coefficient of performance, heating load exergy, lambda, gas turbine power, exergy destruction, actual compressor outlet air temperature, and heat recovery steam generator (HRSG) outlet temperature were taken as objective output parameters for the modeling. Up to now, no comparably detailed study has investigated a compound power plant by combining thermodynamic analysis with network predictability. It follows that a multilayer perceptron neural network with the back-propagation algorithm, deployed in a 10–8–9 configuration, yields modeling reliability in the range R² = 0.995–0.999. When the dataset is treated with the trainlm learning algorithm and diversified neurons, the mean square error (MSE) obtained equals 0.2175. This denotes a powerful modeling achievement at both scientific and industrial scale, saving considerable computational cost on the combined cooling, heating, and power system while boosting energy efficiency and system maintenance.

  14. Interval-based reconstruction for uncertainty quantification in PET

    Science.gov (United States)

    Kucharczak, Florentin; Loquin, Kevin; Buvat, Irène; Strauss, Olivier; Mariano-Goulart, Denis

    2018-02-01

    A new directed interval-based tomographic reconstruction algorithm, called non-additive interval based expectation maximization (NIBEM), is presented. It uses non-additive modeling of the forward operator that provides intervals instead of single-valued projections. The detailed approach is an extension of the maximum-likelihood expectation-maximization (ML-EM) algorithm based on intervals. The main motivation for this extension is that the resulting intervals have appealing properties for estimating the statistical uncertainty associated with the reconstructed activity values. After reviewing previously published theoretical concepts related to interval-based projectors, this paper describes the NIBEM algorithm and gives examples that highlight the properties and advantages of this interval-valued reconstruction.

  15. Comparison of Conventional and ANN Models for River Flow Forecasting

    Science.gov (United States)

    Jain, A.; Ganti, R.

    2011-12-01

    Hydrological models are useful in many water resources applications such as flood control, irrigation and drainage, hydro power generation, water supply, erosion and sediment control, etc. Estimates of runoff are needed in many water resources planning, design, development, operation and maintenance activities. River flow is generally estimated using time series or rainfall-runoff models. Recently, soft artificial intelligence tools such as Artificial Neural Networks (ANNs) have become popular for research purposes but have not been extensively adopted in operational hydrological forecasts. There is a strong need to develop ANN models based on real catchment data and compare them with the conventional models. In this paper, a comparative study has been carried out for river flow forecasting using conventional and ANN models. Among the conventional models, multiple linear and nonlinear regression models and time series models of the auto-regressive (AR) type have been developed. A feed-forward neural network structure trained using the back-propagation algorithm, a gradient search method, was adopted. The daily river flow data derived from the Godavari Basin @ Polavaram, Andhra Pradesh, India have been employed to develop all the models included here. Two inputs, the flows at the two past time steps Q(t-1) and Q(t-2), were selected using partial auto-correlation analysis for forecasting the flow at time t, Q(t). A wide range of error statistics has been used to evaluate the performance of all the models developed in this study. It has been found that the regression and AR models performed comparably, and the ANN model performed the best amongst all the models investigated in this study. It is concluded that the ANN model should be adopted in real catchments for hydrological modeling and forecasting.
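
The AR-type baseline above, with Q(t-1) and Q(t-2) as inputs, amounts to an ordinary least-squares fit of an AR(2) model; a compact sketch (helper names are ours):

```python
import numpy as np

def fit_ar2(q):
    """Fit Q(t) = a1*Q(t-1) + a2*Q(t-2) + c by ordinary least
    squares, the conventional baseline the ANN is compared to."""
    q = np.asarray(q, float)
    # Rows t = 2..n-1: regressors Q(t-1), Q(t-2), intercept.
    X = np.column_stack([q[1:-1], q[:-2], np.ones(len(q) - 2)])
    coef, *_ = np.linalg.lstsq(X, q[2:], rcond=None)
    return coef                                  # (a1, a2, c)

def forecast_ar2(coef, q_prev, q_prev2):
    a1, a2, c = coef
    return a1 * q_prev + a2 * q_prev2 + c
```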

  16. Energy optimization and prediction of complex petrochemical industries using an improved artificial neural network approach integrating data envelopment analysis

    International Nuclear Information System (INIS)

    Han, Yong-Ming; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-01-01

    Graphical abstract: This paper proposes energy optimization and prediction of complex petrochemical industries based on a DEA-integrated ANN approach (DEA-ANN). The proposed approach utilizes the DEA model with slack variables for sensitivity analysis to determine the effective decision making units (DMUs) and indicate the optimized direction of the ineffective DMUs. Compared with the traditional ANN approach, the DEA-ANN prediction model is effectively verified by executing a linear comparison between all DMUs and the effective DMUs using the standard data source from the UCI (University of California at Irvine) repository. Finally, the proposed model is validated through an application in a complex ethylene production system of the Chinese petrochemical industry. Meanwhile, the optimization result and the prediction value are obtained to reduce the energy consumption of the ethylene production system, guide ethylene production and improve energy efficiency. - Highlights: • The DEA-integrated ANN approach is proposed. • The DEA-ANN prediction model is effectively verified using the standard data source from the UCI repository. • The energy optimization and prediction framework of complex petrochemical industries based on the proposed method is obtained. • The proposed method is valid and efficient in improving energy efficiency in complex petrochemical plants. - Abstract: Since complex petrochemical data are high-dimensional, uncertain and noisy, it is difficult to accurately optimize and predict the energy usage of complex petrochemical systems. Therefore, this paper proposes a data envelopment analysis (DEA) integrated artificial neural network (ANN) approach (DEA-ANN). The proposed approach utilizes the DEA model with slack variables for sensitivity analysis to determine the effective decision making units (DMUs) and indicate the optimized direction of the ineffective DMUs. Compared with the traditional ANN approach, the DEA-ANN

  17. Fast GPU-based Monte Carlo code for SPECT/CT reconstructions generates improved 177Lu images.

    Science.gov (United States)

    Rydén, T; Heydorn Lagerlöf, J; Hemmingsson, J; Marin, I; Svensson, J; Båth, M; Gjertsson, P; Bernhardt, P

    2018-01-04

    Full Monte Carlo (MC)-based SPECT reconstructions have a strong potential for correcting for image degrading factors, but the reconstruction times are long. The objective of this study was to develop a highly parallel Monte Carlo code for fast, ordered subset expectation maximization (OSEM) reconstructions of SPECT/CT images. The MC code was written in the Compute Unified Device Architecture language for a computer with four graphics processing units (GPUs) (GeForce GTX Titan X, Nvidia, USA). This enabled simulations of parallel photon emissions from the voxel matrix (128³ or 256³). Each computed tomography (CT) number was converted to attenuation coefficients for photoabsorption, coherent scattering, and incoherent scattering. For photon scattering, the deflection angle was determined by the differential scattering cross sections. An angular response function was developed and used to model the accepted angles for photon interaction with the crystal, and a detector scattering kernel was used for modeling the photon scattering in the detector. Predefined energy and spatial resolution kernels for the crystal were used. The MC code was implemented in the OSEM reconstruction of clinical and phantom ¹⁷⁷Lu SPECT/CT images. The Jaszczak image quality phantom was used to evaluate the performance of the MC reconstruction in comparison with attenuation-corrected (AC) OSEM reconstructions and AC OSEM reconstructions with resolution recovery correction (RRC). The performance of the MC code was 3200 million photons/s. The required number of photons emitted per voxel to obtain a sufficiently low noise level in the simulated image was 200 for a 128³ voxel matrix. With this number of emitted photons/voxel, the MC-based OSEM reconstruction with ten subsets was performed within 20 s/iteration. The images converged after around six iterations. Therefore, the reconstruction time was around 3 min. The activity recovery for the spheres in the Jaszczak phantom was
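The OSEM scheme the code plugs into can be illustrated independently of the Monte Carlo projector. Below is a minimal sketch of the underlying EM update with a single subset (plain MLEM); the small dense system matrix is a hypothetical stand-in for the GPU-evaluated Monte Carlo projector, and all sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dense system matrix: 6 detector bins viewing 4 voxels.
# In the paper this forward operator is evaluated by the GPU Monte Carlo code.
A = rng.uniform(0.1, 1.0, size=(6, 4))
x_true = np.array([1.0, 4.0, 2.0, 0.5])
y = A @ x_true                       # noise-free projection data

x = np.ones(4)                       # uniform initial activity estimate
sens = A.T @ np.ones(6)              # sensitivity image, A^T 1
for _ in range(500):                 # OSEM would cycle over subsets of rows
    x *= (A.T @ (y / (A @ x))) / sens

print(np.round(x, 3))
```

OSEM accelerates this by applying the same multiplicative update using only a subset of the projection rows per sub-iteration.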

  18. An ANN-based approach to predict blast-induced ground vibration of Gol-E-Gohar iron ore mine, Iran

    Directory of Open Access Journals (Sweden)

    Mahdi Saadat

    2014-02-01

    Full Text Available Blast-induced ground vibration is one of the inevitable outcomes of blasting in mining projects and may cause substantial damage to rock mass as well as nearby structures and human beings. In this paper, an attempt has been made to present an application of an artificial neural network (ANN) to predict the blast-induced ground vibration of the Gol-E-Gohar (GEG) iron ore mine, Iran. A four-layer feed-forward back-propagation multi-layer perceptron (MLP) was used and trained with the Levenberg–Marquardt algorithm. To construct the ANN models, the maximum charge per delay, distance from blasting face to monitoring point, stemming and hole depth were taken as inputs, whereas peak particle velocity (PPV) was considered as the output parameter. A database consisting of 69 data sets recorded at strategic and vulnerable locations of the GEG iron mine was used to train and test the generalization capability of the ANN models. The coefficient of determination (R²) and mean square error (MSE) were chosen as indicators of network performance. A network with architecture 4-11-5-1, R² of 0.957 and MSE of 0.000722 was found to be optimum. To demonstrate the superiority of the ANN approach, the same 69 data sets were used for the prediction of PPV with four common empirical models as well as multiple linear regression (MLR) analysis. The results revealed that the proposed ANN approach performs better than the empirical and MLR models.
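The 4-11-5-1 layout is easy to sketch. The forward pass below uses hypothetical random weights purely to show the layer structure; the paper trains the weights with the Levenberg–Marquardt algorithm on the 69 recorded data sets, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_forward(x, weights, biases):
    """Forward pass of a feed-forward MLP with tanh hidden layers and a
    linear output, matching the 4-11-5-1 layout in the abstract."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = np.tanh(a @ W + b)
    return a @ weights[-1] + biases[-1]

layers = [4, 11, 5, 1]               # 4 inputs -> two hidden layers -> PPV
weights = [rng.normal(0, 0.5, (m, n)) for m, n in zip(layers[:-1], layers[1:])]
biases = [np.zeros(n) for n in layers[1:]]

# One hypothetical blast record: charge per delay, distance, stemming,
# hole depth (scaled to [0, 1], as ANN inputs typically are).
x = np.array([[0.6, 0.3, 0.5, 0.7]])
ppv = mlp_forward(x, weights, biases)
print(ppv.shape)
```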

  19. Application of a Laplace transform pair model for high-energy x-ray spectral reconstruction.

    Science.gov (United States)

    Archer, B R; Almond, P R; Wagner, L K

    1985-01-01

    A Laplace transform pair model, previously shown to accurately reconstruct x-ray spectra at diagnostic energies, has been applied to megavoltage energy beams. The inverse Laplace transforms of 2-, 6-, and 25-MV attenuation curves were evaluated to determine the energy spectra of these beams. The 2-MV data indicate that the model can reliably reconstruct spectra in the low megavoltage range. Experimental limitations in acquiring the 6-MV transmission data demonstrate the sensitivity of the model to systematic experimental error. The 25-MV data result in a physically realistic approximation of the present spectrum.

  20. Improved free-energy landscape reconstruction of bacteriorhodopsin highlights local variations in unfolding energy.

    Science.gov (United States)

    Heenan, Patrick R; Yu, Hao; Siewny, Matthew G W; Perkins, Thomas T

    2018-03-28

    Precisely quantifying the energetics that drive the folding of membrane proteins into a lipid bilayer remains challenging. More than 15 years ago, atomic force microscopy (AFM) emerged as a powerful tool to mechanically extract individual membrane proteins from a lipid bilayer. Concurrently, fluctuation theorems, such as the Jarzynski equality, were applied to deduce equilibrium free energies (ΔG⁰) from non-equilibrium single-molecule force spectroscopy records. The combination of these two advances in single-molecule studies deduced the free energy of the model membrane protein bacteriorhodopsin in its native lipid bilayer. To elucidate this free-energy landscape at a higher resolution, we applied two recent developments. First, as an input to the reconstruction, we used force-extension curves acquired with a 100-fold higher time resolution and 10-fold higher force precision than traditional AFM studies of membrane proteins. Next, by using an inverse Weierstrass transform and the Jarzynski equality, we removed the free energy associated with the force probe and determined the molecular free-energy landscape of the molecule under study, bacteriorhodopsin. The resulting landscape yielded an average unfolding free energy per amino acid (aa) of 1.0 ± 0.1 kcal/mol, in agreement with past single-molecule studies. Moreover, on a smaller spatial scale, this high-resolution landscape also agreed with an equilibrium measurement of a particular three-aa transition in bacteriorhodopsin that yielded 2.7 kcal/mol/aa, an unexpectedly high value. Hence, while the average unfolding ΔG⁰ per aa is a useful metric, the derived high-resolution landscape details significant local variation from the mean. More generally, we demonstrated that, as anticipated, the inverse Weierstrass transform is an efficient means to reconstruct free-energy landscapes from AFM data.
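The Jarzynski step can be illustrated numerically. The sketch below assumes Gaussian-distributed work values, for which the equality holds exactly with ΔG = μ − σ²/(2kT); real AFM work distributions are not Gaussian, and the paper additionally deconvolves the force probe via the inverse Weierstrass transform, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(2)
kT = 0.593                 # kcal/mol near room temperature

# Synthetic non-equilibrium work values (kcal/mol) from repeated pulls.
# For a Gaussian work distribution, dG = mu - sigma^2 / (2 kT) exactly.
mu, sigma = 3.3, 0.3
work = rng.normal(mu, sigma, size=50_000)
dG_expected = mu - sigma**2 / (2 * kT)

# Jarzynski equality: exp(-dG/kT) = < exp(-W/kT) > over repeated pulls
dG_est = -kT * np.log(np.mean(np.exp(-work / kT)))
print(round(dG_est, 2), round(dG_expected, 2))
```

In practice the exponential average is dominated by rare low-work trajectories, which is why the high time resolution and force precision of the curves matter for the reconstruction.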

  1. Development of surrogate models using artificial neural network for building shell energy labelling

    International Nuclear Information System (INIS)

    Melo, A.P.; Cóstola, D.; Lamberts, R.; Hensen, J.L.M.

    2014-01-01

    Surrogate models are an important part of building energy labelling programs, but these models still present low accuracy, particularly in cooling-dominated climates. The objective of this study was to evaluate the feasibility of using an artificial neural network (ANN) to improve the accuracy of surrogate models for labelling purposes. An ANN was applied to model the building stock of a city in Brazil, based on the results of extensive simulations using the high-resolution building energy simulation program EnergyPlus. Sensitivity and uncertainty analyses were carried out to evaluate the behaviour of the ANN model, and the variations in the best and worst performance for several typologies were analysed in relation to variations in the input parameters and building characteristics. The results obtained indicate that an ANN can represent the interaction between input and output data for a vast and diverse building stock. Sensitivity analysis showed that no single input parameter can be identified as the main factor responsible for the building energy performance. The uncertainty associated with several parameters plays a major role in assessing building energy performance, together with the facade area and the shell-to-floor ratio. The results of this study may have a profound impact as ANNs could be applied in the future to define regulations in many countries, with positive effects on optimizing the energy consumption. - Highlights: • We model several typologies which have variation in input parameters. • We evaluate the accuracy of surrogate models for labelling purposes. • ANN is applied to model the building stock. • Uncertainty in building plays a major role in the building energy performance. • Results show that ANN could help to develop building energy labelling systems

  2. Fast dictionary-based reconstruction for diffusion spectrum imaging.

    Science.gov (United States)

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F; Yendiki, Anastasia; Wald, Lawrence L; Adalsteinsson, Elfar

    2013-11-01

    Diffusion spectrum imaging reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using MATLAB running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using principal component analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using a pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of the dictionary-based CS algorithm.
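The second method reduces, per voxel, to a single regularized linear solve, which is why it is so much faster than iterative CS. A toy sketch with hypothetical dimensions (in the paper the signals are q-space pdfs and the dictionary comes from training data or K-SVD):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical sizes: 64 q-space samples, a 40-atom dictionary of training
# pdfs (columns of D), and 48 retained q-space measurements.
n_q, n_atoms, m = 64, 40, 48
D = rng.normal(size=(n_q, n_atoms))
coeff_true = rng.normal(size=n_atoms)
pdf_true = D @ coeff_true

F = np.eye(n_q)[rng.choice(n_q, size=m, replace=False)]  # undersampling mask
y = F @ pdf_true

# Pseudoinverse with Tikhonov regularization w.r.t. the dictionary:
# coeff = (A^T A + lam I)^-1 A^T y,  with  A = F D
A = F @ D
lam = 1e-6
coeff = np.linalg.solve(A.T @ A + lam * np.eye(n_atoms), A.T @ y)
pdf_rec = D @ coeff
```

Because the normal-equations matrix depends only on the undersampling pattern and the dictionary, it can be factorized once and reused for every voxel in the slice.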

  3. HAWC Analysis of the Crab Nebula Using Neural-Net Energy Reconstruction

    Science.gov (United States)

    Marinelli, Samuel; HAWC Collaboration

    2017-01-01

    The HAWC (High-Altitude Water-Cherenkov) experiment is a TeV γ-ray observatory located 4100 m above sea level on the Sierra Negra mountain in Puebla, Mexico. The detector consists of 300 water-filled tanks, each instrumented with 4 photomultiplier tubes that utilize the water-Cherenkov technique to detect atmospheric air showers produced by cosmic γ rays. Construction of HAWC was completed in March 2015. The experiment's wide field of view (2 sr) and high duty cycle (> 95%) make it a powerful survey instrument sensitive to pulsar wind nebulae, supernova remnants, active galactic nuclei, and other γ-ray sources. The mechanisms of particle acceleration at these sources can be studied by analyzing their energy spectra. To this end, we have developed an event-by-event energy-reconstruction algorithm employing an artificial neural network to estimate the energies of primary γ rays. The Crab Nebula, the brightest source of TeV photons, makes an excellent calibration source for this technique. We will present preliminary results from an analysis of the Crab energy spectrum using this new energy-reconstruction method. This work was supported by the National Science Foundation.

  4. Electron and photon reconstruction and performance in ATLAS using a dynamical, topological cell clustering-based approach

    CERN Document Server

    The ATLAS collaboration

    2017-01-01

    The electron and photon reconstruction in ATLAS has moved towards the use of a dynamical, topological cell-based approach for cluster building, owing to advancements in the calibration procedure which allow for such a method to be applied. The move to this new technique allows for improved measurements of electron and photon energies, particularly in situations where an electron radiates a bremsstrahlung photon, or a photon converts to an electron-positron pair. This note details the changes to the ATLAS electron and photon reconstruction software, and assesses its performance under current LHC luminosity conditions using simulated data. Changes to the converted photon reconstruction are also detailed, which improve the reconstruction efficiency of double-track converted photons, as well as reducing the reconstruction of spurious one-track converted photons. The performance of the new reconstruction algorithm is also presented in a number of important topologies relevant to precision Standard Model physics,...

  5. Evaluation of proxy-based millennial reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Terry C.K.; Tsao, Min [University of Victoria, Department of Mathematics and Statistics, Victoria, BC (Canada); Zwiers, Francis W. [Environment Canada, Climate Research Division, Toronto, ON (Canada)

    2008-08-15

    A range of existing statistical approaches for reconstructing historical temperature variations from proxy data are compared using both climate model data and real-world paleoclimate proxy data. We also propose a new method for reconstruction that is based on a state-space time series model and Kalman filter algorithm. The state-space modelling approach and the recently developed RegEM method generally perform better than their competitors when reconstructing interannual variations in Northern Hemispheric mean surface air temperature. On the other hand, a variety of methods are seen to perform well when reconstructing surface air temperature variability on decadal time scales. An advantage of the new method is that it can incorporate additional, non-temperature, information into the reconstruction, such as the estimated response to external forcing, thereby permitting a simultaneous reconstruction and detection analysis as well as future projection. An application of these extensions is also demonstrated in the paper. (orig.)

  6. Sexuality and gender in contemporary women's Gothic fiction - Angela Carter's and Anne Rice's Vampires: Angela Carter's and Anne Rice's Vampires

    OpenAIRE

    Fernanda Sousa Carvalho

    2009-01-01

    In this thesis, I provide an analysis of Angela Carter's and Anne Rice's works based on their depiction of vampires. My corpus is composed of Carter's short stories 'The Loves of Lady Purple' and 'The Lady of the House of Love' and of Rice's novels The Vampire Lestat and The Queen of the Damned. My analysis of this corpus is based on four approaches: a comparison between Carter's and Rice's works, supported by their common use of vampire characters; an investigation of how this use con...

  7. Track reconstruction for the Mu3e experiment based on a novel Multiple Scattering fit

    Directory of Open Access Journals (Sweden)

    Kozlinskiy Alexandr

    2017-01-01

    Full Text Available The Mu3e experiment is designed to search for the lepton flavor violating decay μ⁺ → e⁺e⁺e⁻. The aim of the experiment is to reach a branching ratio sensitivity of 10⁻¹⁶. In a first phase the experiment will be performed at an existing beam line at the Paul Scherrer Institute (Switzerland) providing 10⁸ muons per second, which will allow a sensitivity of 2 · 10⁻¹⁵ to be reached. The muons, with a momentum of about 28 MeV/c, are stopped and decay at rest on a target. The decay products (positrons and electrons with energies below 53 MeV) are measured by a tracking detector consisting of two double layers of 50 μm thin silicon pixel sensors. The high granularity of the pixel detector, with a pixel size of 80 μm × 80 μm, allows for precise track reconstruction in the high-multiplicity environment of the Mu3e experiment, reaching 100 tracks per reconstruction frame of 50 ns in the final phase of the experiment. To deal with such high rates and combinatorics, the Mu3e track reconstruction uses a novel fit algorithm that in the simplest case takes into account only the multiple scattering, which allows for fast online tracking on a GPU-based filter farm. An implementation of the 3-dimensional multiple scattering fit based on hit triplets is described. The extension of the fit that takes into account energy losses and pixel size is used for offline track reconstruction. The algorithm and performance of the offline track reconstruction based on a full Geant4 simulation of the Mu3e detector are presented.

  8. Fast Dictionary-Based Reconstruction for Diffusion Spectrum Imaging

    Science.gov (United States)

    Bilgic, Berkin; Chatnuntawech, Itthi; Setsompop, Kawin; Cauley, Stephen F.; Yendiki, Anastasia; Wald, Lawrence L.; Adalsteinsson, Elfar

    2015-01-01

    Diffusion Spectrum Imaging (DSI) reveals detailed local diffusion properties at the expense of substantially long imaging times. It is possible to accelerate acquisition by undersampling in q-space, followed by image reconstruction that exploits prior knowledge on the diffusion probability density functions (pdfs). Previously proposed methods impose this prior in the form of sparsity under wavelet and total variation (TV) transforms, or under adaptive dictionaries that are trained on example datasets to maximize the sparsity of the representation. These compressed sensing (CS) methods require full-brain processing times on the order of hours using Matlab running on a workstation. This work presents two dictionary-based reconstruction techniques that use analytical solutions, and are two orders of magnitude faster than the previously proposed dictionary-based CS approach. The first method generates a dictionary from the training data using Principal Component Analysis (PCA), and performs the reconstruction in the PCA space. The second proposed method applies reconstruction using pseudoinverse with Tikhonov regularization with respect to a dictionary. This dictionary can either be obtained using the K-SVD algorithm, or it can simply be the training dataset of pdfs without any training. All of the proposed methods achieve reconstruction times on the order of seconds per imaging slice, and have reconstruction quality comparable to that of dictionary-based CS algorithm. PMID:23846466

  9. Artificial Neural Networks (ANNs for flood forecasting at Dongola Station in the River Nile, Sudan

    Directory of Open Access Journals (Sweden)

    Sulafa Hag Elsafi

    2014-09-01

    Full Text Available Heavy seasonal rains cause the River Nile in Sudan to overflow and flood the surrounding areas. The floods destroy houses, crops, roads, and basic infrastructure, resulting in the displacement of people. This study aimed to forecast the River Nile flow at Dongola Station in Sudan using an Artificial Neural Network (ANN) as a modeling tool, and validated the accuracy of the model against actual flow. The ANN model was formulated to simulate flows at a certain location in the river reach, based on flow at upstream locations. Different procedures were applied to predict flooding by the ANN. Readings from stations along the Blue Nile, White Nile, Main Nile, and River Atbara between 1965 and 2003 were used to predict the likelihood of flooding at Dongola Station. The analysis indicated that the ANN provides a reliable means of detecting the flood hazard in the River Nile.

  10. Reconstruction of the energy flux and search for squarks and gluinos in D0 experiment; Reconstruction du flux d'energie et recherche de squarks et gluinos dans l'experience D0

    Energy Technology Data Exchange (ETDEWEB)

    Ridel, M

    2002-04-01

    The DØ experiment is located at the Fermi National Accelerator Laboratory on the Tevatron proton-antiproton collider. Run II started in March 2001 after 5 years of shutdown and will allow DØ to extend its reach in searches for squarks and gluinos, particles predicted by supersymmetry. In this work, I focused on their decays that lead to signatures with jets and missing transverse energy. Before data taking started, I studied both software and hardware ways to improve the energy measurement, which is crucial for jets and for missing transverse energy. Energy deposits in the calorimeter were clustered with cellNN at the cell level instead of the tower level. Efforts were made to take advantage of the calorimeter granularity and aim at reconstructing individual particle showers. CellNN starts from the third layer, which has a four times finer granularity than the other layers. The longitudinal information was used to detect overlaps between electromagnetic and hadronic showers. Then, clusters and reconstructed tracks from the central detectors are combined and their energies compared; the better measurement is kept. This procedure improves the reconstruction of the energy flow of each event. The efficiency of the current calorimeter triggers was determined, and they were used to perform a Monte Carlo search analysis for squarks and gluinos in the mSUGRA framework. The lower bound that DØ will be able to put on squark and gluino masses with a 100 pb⁻¹ integrated luminosity was predicted. The use of the energy flow instead of standard reconstruction tools will improve this lower limit. (author)

  11. Variance-based Salt Body Reconstruction

    KAUST Repository

    Ovcharenko, Oleg

    2017-05-26

    Seismic inversions of salt bodies are challenging when updating velocity models based on Born approximation-inspired gradient methods. We propose a variance-based method for velocity model reconstruction in regions complicated by massive salt bodies. The novel idea lies in retrieving useful information from simultaneous updates corresponding to different single frequencies. Instead of the commonly used averaging of single-iteration monofrequency gradients, our algorithm iteratively reconstructs salt bodies in an outer loop based on updates from a set of multiple frequencies after a few iterations of full-waveform inversion. The variance among these updates is used to identify areas where considerable cycle-skipping occurs. In such areas, we update velocities by interpolating maximum velocities within a certain region. The result of several recursive interpolations is later used as a new starting model to improve the results of conventional full-waveform inversion. An application on part of the BP 2004 model highlights the evolution of the proposed approach and demonstrates its effectiveness.
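The core idea, flagging cells where single-frequency updates disagree, can be sketched on a hypothetical 1-D velocity profile. The paper works on 2-D models and refills flagged areas by recursive interpolation of maximum velocities; here that step is simplified to a constant fill, and all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical single-frequency FWI updates on a 1-D profile; cells 40-59
# suffer cycle-skipping, so the frequencies disagree strongly there.
n_freq, n_cells = 5, 100
updates = rng.normal(0.0, 0.05, size=(n_freq, n_cells))
updates[:, 40:60] += rng.normal(0.0, 1.0, size=(n_freq, 20))

var = updates.var(axis=0)
mask = var > 10 * np.median(var)      # large inter-frequency variance

velocity = 2000.0 + updates.mean(axis=0)   # m/s; average the agreeing updates
# Fill flagged cells with a high velocity (stand-in for salt interpolation)
velocity[mask] = velocity[~mask].max()
```

The filled model would then serve as the starting model for a conventional full-waveform inversion pass.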

  12. Predicting the Deflections of Micromachined Electrostatic Actuators Using Artificial Neural Network (ANN

    Directory of Open Access Journals (Sweden)

    Hing Wah LEE

    2009-03-01

    Full Text Available In this study, a general-purpose Artificial Neural Network (ANN) model based on the feed-forward back-propagation (FFBP) algorithm has been used to predict the deflections of micromachined structures actuated electrostatically under different loadings and geometrical parameters. A limited range of simulation results obtained via CoventorWare™ numerical software was used initially to train the neural network via the back-propagation algorithm. The micromachined structures considered in the analyses are diaphragms, fixed-fixed beams and cantilevers. ANN simulation results are compared with results obtained via CoventorWare™ simulations and existing analytical work for validation purposes. The proposed ANN model accurately predicts the deflections of the micromachined structures with a great reduction of simulation effort, establishing the method's superiority. This method can be extended for applications in other sensors, particularly for modeling sensors applying electrostatic actuation, which are difficult to model due to the inherent non-linearity of the electro-mechanical coupling response.

  13. DD4Hep based event reconstruction

    CERN Document Server

    AUTHOR|(SzGeCERN)683529; Frank, Markus; Gaede, Frank-Dieter; Hynds, Daniel; Lu, Shaojun; Nikiforou, Nikiforos; Petric, Marko; Simoniello, Rosa; Voutsinas, Georgios Gerasimos

    The DD4HEP detector description toolkit offers a flexible and easy-to-use solution for the consistent and complete description of particle physics detectors in a single system. The sub-component DDREC provides a dedicated interface to the detector geometry as needed for event reconstruction. With DDREC there is no need to define an additional, separate reconstruction geometry as is often done in HEP, but one can transparently extend the existing detailed simulation model to be also used for the reconstruction. Based on the extension mechanism of DD4HEP, DDREC allows one to attach user defined data structures to detector elements at all levels of the geometry hierarchy. These data structures define a high level view onto the detectors describing their physical properties, such as measurement layers, point resolutions, and cell sizes. For the purpose of charged particle track reconstruction, dedicated surface objects can be attached to every volume in the detector geometry. These surfaces provide the measuremen...

  14. 3D dictionary learning based iterative cone beam CT reconstruction

    Directory of Open Access Journals (Sweden)

    Ti Bai

    2014-03-01

    Full Text Available Purpose: This work is to develop a 3D dictionary learning based cone beam CT (CBCT) reconstruction algorithm on graphics processing units (GPU) to improve the quality of sparse-view CBCT reconstruction with high efficiency. Methods: A 3D dictionary containing 256 small volumes (atoms) of 3 × 3 × 3 voxels was trained from a large number of blocks extracted from a high-quality volume image. On this basis, we utilized a Cholesky decomposition based orthogonal matching pursuit algorithm to find the sparse representation of each block. To accelerate the time-consuming sparse coding in the 3D case, we implemented the sparse coding in a parallel fashion by taking advantage of the tremendous computational power of the GPU. The conjugate gradient least squares algorithm was adopted to minimize the data fidelity term. Evaluations are performed based on a head-neck patient case. FDK reconstruction with the full dataset of 364 projections is used as the reference. We compared the proposed 3D dictionary learning based method with tight frame (TF) by performing reconstructions on a subset of 121 projections. Results: Compared to TF-based CBCT reconstruction, which shows good overall performance, our experiments indicated that 3D dictionary learning based CBCT reconstruction is able to recover finer structures, remove more streaking artifacts and also induce fewer blocky artifacts. Conclusion: The 3D dictionary learning based CBCT reconstruction algorithm is able to capture structural information while suppressing noise, and hence achieves high-quality reconstruction in the sparse-view case. The GPU realization of the whole algorithm offers a significant efficiency enhancement, making this algorithm more feasible for potential clinical application. Cite this article as: Bai T, Yan H, Shi F, Jia X, Lou Y, Xu Q, Jiang S, Mou X. 3D dictionary learning based iterative cone beam CT reconstruction. Int J Cancer Ther Oncol 2014; 2(2):020240. DOI: 10
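Sparse coding via orthogonal matching pursuit is the step the paper accelerates with a Cholesky-based refit and GPU parallelism. A plain numpy sketch follows; note the dictionary here is a smaller hypothetical one (128 atoms for 64-sample blocks) rather than the paper's 256-atom, 3 × 3 × 3 setup, chosen so the greedy recovery is robust at this scale:

```python
import numpy as np

rng = np.random.default_rng(5)

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: select k atoms, refitting the
    coefficients on the chosen support at each step (this refit is what a
    Cholesky update accelerates)."""
    support, resid = [], y.copy()
    coef = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ resid))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        resid = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Hypothetical 128-atom dictionary for 64-sample blocks, 3-sparse signal.
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)          # unit-norm atoms
x_true = np.zeros(128)
x_true[[10, 70, 120]] = [1.5, -2.0, 0.8]
y = D @ x_true

x_hat = omp(D, y, k=3)
```

In the full algorithm, one such sparse code is computed per image block in parallel, and the coded blocks regularize the iterative CBCT reconstruction.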

  15. A Kalman Filter-Based Method for Reconstructing GMS-5 Global Solar Radiation by Introduction of In Situ Data

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2013-06-01

    Full Text Available Solar radiation is an important input for various land-surface energy balance models. Global solar radiation data retrieved from the Japanese Geostationary Meteorological Satellite 5 (GMS-5) Visible and Infrared Spin Scan Radiometer (VISSR) have been widely used in recent years. However, due to the impact of clouds, aerosols, solar elevation angle and bidirectional reflection, spatial or temporal deficiencies often exist in solar radiation datasets that are derived from satellite remote sensing, which can seriously affect the accuracy of land-surface energy balance models. The goal of reconstructing radiation data is to simulate the seasonal variation patterns of solar radiation, using various statistical and numerical analysis methods to interpolate the missing observations and optimize the whole time-series dataset. In the current study, a reconstruction method based on data assimilation is proposed. Using a Kalman filter as the assimilation algorithm, the retrieved radiation values are corrected through the continuous introduction of local in-situ global solar radiation (GSR) provided by the China Meteorological Data Sharing Service System (Daily radiation dataset, Version 3), collected from 122 radiation stations across China. A complete and optimal time-series dataset is ultimately obtained. This method is applied and verified in China's northern agricultural areas (humid, semi-humid and semi-arid regions in a warm temperate zone). The results show that the mean value and standard deviation of the reconstructed solar radiation data series are significantly improved, with greater consistency with ground-based observations than the series before reconstruction. The method implemented in this study provides a new solution for the time-series reconstruction of surface energy parameters, which can provide more reliable data for scientific research and regional renewable-energy planning.
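A scalar sketch of the assimilation step follows, with entirely synthetic numbers; the operational scheme, error variances, and the way the retrieval enters the prediction are assumptions for illustration, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic daily global solar radiation (MJ m^-2): truth, a noisier
# satellite retrieval, and sparser but more accurate in-situ readings.
n = 120
t = np.arange(n)
truth = 15.0 + 8.0 * np.sin(2 * np.pi * t / 120)
sat = truth + rng.normal(0, 2.0, n)
ground = truth + rng.normal(0, 0.5, n)
has_ground = t % 5 == 0                      # station report every fifth day

Q, R = 1.0, 0.25                             # process / in-situ error variances
x, P = sat[0], 4.0
rec = np.empty(n)
for i in range(n):
    x, P = 0.5 * x + 0.5 * sat[i], P + Q     # predict, pulled to the retrieval
    if has_ground[i]:                        # Kalman update with station obs
        K = P / (P + R)
        x, P = x + K * (ground[i] - x), (1.0 - K) * P
    rec[i] = x

rmse = lambda a: float(np.sqrt(np.mean((a - truth) ** 2)))
print(round(rmse(sat), 2), round(rmse(rec), 2))
```

The reconstructed series blends the dense satellite signal with the occasional accurate station readings, which is what reduces its error relative to the raw retrieval.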

  16. Research on compressive sensing reconstruction algorithm based on total variation model

    Science.gov (United States)

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing breaks through the Nyquist sampling theorem, providing a strong theoretical foundation that allows compressive sampling of image signals to be carried out at acquisition time. In imaging procedures using compressed sensing theory, not only is the storage space reduced, but the demand for detector resolution is also greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressed sensing and to a large extent determines the accuracy of the reconstructed image. A reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images and preserves edge information well. To verify the performance and stability of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes. Typical reconstruction algorithms are also compared under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical TV-based algorithms, the proposed reconstruction algorithm has great advantages and can quickly and accurately recover the target image at low measurement rates.
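The minimum-TV formulation can be illustrated on a 1-D piecewise-constant signal. The sketch below deliberately substitutes plain gradient descent on a smoothed TV penalty for the augmented-Lagrangian/alternating-direction solver described above, and all sizes, step sizes, and weights are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(7)

# Piecewise-constant signal measured with 40 random projections (m < n).
n, m = 100, 40
x_true = np.zeros(n)
x_true[30:70] = 1.0
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

lam, eps, step = 0.05, 1e-2, 0.1
x = np.zeros(n)
for _ in range(5000):
    grad_fid = A.T @ (A @ x - y)            # data-fidelity gradient
    d = np.diff(x)
    w = d / np.sqrt(d**2 + eps)             # gradient of smoothed |d|
    grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
    x -= step * (grad_fid + lam * grad_tv)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(round(rel_err, 3))
```

Even at a 40% measurement rate, the TV prior recovers the two jumps because the signal's gradient is sparse; the ADMM-style solver in the abstract reaches the same kind of minimizer far faster than this plain descent.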

  17. Design of an Experiment to Measure ann Using 3H(γ,pnn) at HIγS★

    Directory of Open Access Journals (Sweden)

    Friesen F.Q.L.

    2016-01-01

    Full Text Available We provide an update on the development of an experiment at TUNL for determining the ¹S₀ neutron-neutron (nn) scattering length (ann) from differential cross-section measurements of three-body photodisintegration of the triton. The experiment will be conducted using a linearly polarized gamma-ray beam at the High Intensity Gamma-ray Source (HIγS) and tritium gas contained in thin-walled cells. The main components of the planned experiment are a 230 Ci gas target system, a set of wire chambers and silicon strip detectors on each side of the beam axis, and an array of neutron detectors on each side beyond the silicon detectors. The protons emitted in the reaction are tracked in the wire chambers, and their energy and position are measured in the silicon strip detectors. The first iteration of the experiment will be simplified, making use of a collimator system and silicon detectors to interrogate the main region of interest near 90° in polar angle. Monte-Carlo simulations based on rigorous 3N calculations have been conducted to validate the sensitivity of the experimental setup to ann.

  18. Fast implementations of reconstruction-based scatter compensation in fully 3D SPECT image reconstruction

    International Nuclear Information System (INIS)

    Kadrmas, Dan J.; Karimi, Seemeen S.; Frey, Eric C.; Tsui, Benjamin M.W.

    1998-01-01

    Accurate scatter compensation in SPECT can be performed by modelling the scatter response function during the reconstruction process. This method is called reconstruction-based scatter compensation (RBSC). It has been shown that RBSC has a number of advantages over other methods of compensating for scatter, but using RBSC for fully 3D compensation has resulted in prohibitively long reconstruction times. In this work we propose two new methods that can be used in conjunction with existing methods to achieve marked reductions in RBSC reconstruction times. The first method, coarse-grid scatter modelling, significantly accelerates the scatter model by exploiting the fact that scatter is dominated by low-frequency information. The second method, intermittent RBSC, further accelerates the reconstruction process by limiting the number of iterations during which scatter is modelled. The fast implementations were evaluated using a Monte Carlo simulated experiment of the 3D MCAT phantom with 99mTc tracer, and also using experimentally acquired data with 201Tl tracer. Results indicated that these fast methods can reconstruct, with fully 3D compensation, images very similar to those obtained using standard RBSC methods, and in reconstruction times that are an order of magnitude shorter. Using these methods, fully 3D iterative reconstruction with RBSC can be performed well within the realm of clinically realistic times (under 10 minutes for 64x64x24 image reconstruction). (author)

  19. Evidence-Based ACL Reconstruction

    Directory of Open Access Journals (Sweden)

    E. Carlos RODRIGUEZ-MERCHAN

    2015-01-01

    Full Text Available There is controversy in the literature regarding a number of topics related to anterior cruciate ligament (ACL) reconstruction. The purpose of this article is to answer the following questions: 1) bone-patellar tendon-bone (BPTB) reconstruction or hamstring reconstruction (HR); 2) double bundle or single bundle; 3) allograft or autograft; 4) early or late reconstruction; 5) rate of return to sports after ACL reconstruction; 6) rate of osteoarthritis after ACL reconstruction. A Cochrane Library and PubMed (MEDLINE) search of systematic reviews and meta-analyses related to ACL reconstruction was performed. The key words were: ACL reconstruction, systematic reviews and meta-analysis. The main criterion for selection was that the articles were systematic reviews and meta-analyses focused on the aforementioned questions. Sixty-nine articles were found, but only 26 were selected and reviewed because they had a high grade (I-II) of evidence. BPTB reconstruction was associated with better postoperative knee stability but with a higher rate of morbidity. However, the results of both procedures in terms of long-term functional outcome were similar. The double-bundle ACL reconstruction technique showed better outcomes in rotational laxity, although functional recovery was similar between single-bundle and double-bundle reconstruction. Autograft yielded better results than allograft. There was no difference between early and delayed reconstruction. 82% of patients were able to return to some kind of sport participation. 28% of patients presented radiological signs of osteoarthritis with a follow-up of at least 10 years.

  20. A computation ANN model for quantifying the global solar radiation: A case study of Al-Aqabah-Jordan

    International Nuclear Information System (INIS)

    Abolgasem, I M; Alghoul, M A; Ruslan, M H; Chan, H Y; Khrit, N G; Sopian, K

    2015-01-01

    In this paper, a computation model is developed to predict the global solar radiation (GSR) in Aqaba city from recorded data using artificial neural networks (ANN). The data used in this work are global solar radiation (GSR), sunshine duration, maximum and minimum air temperature, and relative humidity, available from a Jordanian meteorological station over a period of two years. The quality of GSR forecasting is compared using different learning algorithms. The decision to change the ANN architecture is based on the predicted results, in order to obtain the best ANN model for monthly and seasonal GSR. Different configuration patterns were tested using the available observed data. It was found that the model using mainly sunshine duration and air temperature as inputs gives accurate results. The ANN model efficiency and the mean square error values show that the prediction model is accurate. It is found that the effect of the three learning algorithms on the accuracy of the prediction model at the training and testing stages is mostly within the same accuracy range for each time scale. (paper)
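
    As an illustration of the kind of model described, the sketch below trains a tiny one-hidden-layer network by back-propagation on synthetic data; the two inputs stand in for sunshine duration and air temperature, and the target function is invented for the demo (the paper's model was trained on measured station data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for normalized sunshine duration and air temperature,
# with a made-up smooth "GSR" target (for illustration only).
X = rng.uniform(0.0, 1.0, size=(200, 2))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * X[:, 0] * X[:, 1])[:, None]

# One hidden layer of tanh units, trained by plain full-batch backpropagation.
W1 = rng.standard_normal((2, 8)) * 0.5
b1 = np.zeros(8)
W2 = rng.standard_normal((8, 1)) * 0.5
b2 = np.zeros(1)
lr = 0.5

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, y):
    return float(np.mean((pred - y) ** 2))

_, pred0 = forward(X)
loss0 = mse(pred0, y)

for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y) / len(X)          # dLoss/dpred (factor 2 absorbed in lr)
    gW2 = h.T @ err
    gb2 = err.sum(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
    gW1 = X.T @ dh
    gb1 = dh.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred1 = forward(X)
loss1 = mse(pred1, y)
```

    The learning rate and hidden-layer size are exactly the tuning parameters the abstract says must be chosen carefully; here they are fixed by hand.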

  1. [Anne Arold. Kontrastive Analyse...] / Paul Alvre

    Index Scriptorium Estoniae

    Alvre, Paul, 1921-2008

    2001-01-01

    Review of: Arold, Anne. Kontrastive Analyse der Wortbildungsmuster im Deutschen und im Estnischen (am Beispiel der Aussehensadjektive). Tartu, 2000. (Dissertationes philologiae germanicae Universitatis Tartuensis)

  2. Lateral particle density reconstruction from the energy deposits of particles in the KASCADE-Grande detector stations

    International Nuclear Information System (INIS)

    Toma, G.; Brancus, I.M.; Mitrica, B.; Sima, O.; Rebel, H.

    2005-01-01

    The study of primary cosmic rays with energies greater than 10^14 eV is done mostly by indirect observation techniques such as the study of Extensive Air Showers (EAS). Within the much larger effort of inferring the mass and energy of the primaries from EAS observables, the present study aims at delivering a versatile method and software tool for reconstructing lateral particle densities from the energy deposits of particles in the KASCADE-Grande detector stations. The study has been performed on simulated events, taking into account the interaction of the EAS components with the detector array (energy deposits). The energy deposits have been parametrized for different incident energies and angles, making it possible to reconstruct the particle densities in the detectors from the energy deposits. A correlation between the lateral particle density (at ∼600 m from the shower core) and the primary mass and energy has been established. The study puts great emphasis on the quality of the reconstruction and also on the speed of the technique. The data obtained from the study on simulated events will soon be used on real events detected by the KASCADE-Grande array. (authors)

  3. Level-set-based reconstruction algorithm for EIT lung images: first clinical results.

    Science.gov (United States)

    Rahmati, Peyman; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz; Adler, Andy

    2012-05-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure-volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM.

  4. Level-set-based reconstruction algorithm for EIT lung images: first clinical results

    International Nuclear Information System (INIS)

    Rahmati, Peyman; Adler, Andy; Soleimani, Manuchehr; Pulletz, Sven; Frerichs, Inéz

    2012-01-01

    We show the first clinical results using the level-set-based reconstruction algorithm for electrical impedance tomography (EIT) data. The level-set-based reconstruction method (LSRM) allows the reconstruction of non-smooth interfaces between image regions, which are typically smoothed by traditional voxel-based reconstruction methods (VBRMs). We develop a time difference formulation of the LSRM for 2D images. The proposed reconstruction method is applied to reconstruct clinical EIT data of a slow flow inflation pressure–volume manoeuvre in lung-healthy and adult lung-injury patients. Images from the LSRM and the VBRM are compared. The results show comparable reconstructed images, but with an improved ability to reconstruct sharp conductivity changes in the distribution of lung ventilation using the LSRM. (paper)

  5. Gamma regularization based reconstruction for low dose CT

    International Nuclear Information System (INIS)

    Zhang, Junfeng; Chen, Yang; Hu, Yining; Luo, Limin; Shu, Huazhong; Li, Bicao; Liu, Jin; Coatrieux, Jean-Louis

    2015-01-01

    Reducing the radiation in computerized tomography is today a major concern in radiology. Low dose computerized tomography (LDCT) offers a sound way to deal with this problem. However, more severe noise in the reconstructed CT images is observed under low dose scan protocols (e.g. lowered tube current or voltage values). In this paper we propose a Gamma regularization based algorithm for LDCT image reconstruction. This solution is flexible and provides a good balance between the regularizations based on the l0-norm and the l1-norm. We evaluate the proposed approach using the projection data from simulated phantoms and scanned Catphan phantoms. Qualitative and quantitative results show that the Gamma regularization based reconstruction can perform better in both edge-preserving and noise suppression when compared with other norms. (paper)

  6. Energy reconstruction and calibration algorithms for the ATLAS electromagnetic calorimeter

    CERN Document Server

    Delmastro, M

    2003-01-01

    The work of this thesis is devoted to the study, development and optimization of the energy reconstruction and calibration algorithms for the electromagnetic calorimeter (EMC) of the ATLAS experiment, presently under installation and commissioning at the CERN Large Hadron Collider in Geneva (Switzerland). A deep study of the electrical characteristics of the detector and of signal formation and propagation is conducted: an electrical model of the detector is developed and analyzed through simulations; a hardware model (mock-up) of a group of EMC readout cells has been built, allowing direct collection and study of the properties of the signals emerging from the EMC cells. We analyze the existing multiple-sampled signal reconstruction strategy, showing the need for an improvement in order to reach the advertised performances of the detector. The optimal filtering reconstruction technique is studied and implemented, taking into account the differences between the ionization and calibration waveforms as e...
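
    The optimal filtering technique mentioned here estimates the pulse amplitude as a weighted sum of the digitized samples, with weights derived from the known pulse shape and the noise covariance. A minimal numerical sketch (toy pulse shape and noise covariance, not ATLAS values):

```python
import numpy as np

# Toy normalized pulse shape sampled at 5 points (peak = 1).
g = np.array([0.1, 0.6, 1.0, 0.7, 0.3])

# Noise covariance between samples: white noise plus slight correlation.
n = len(g)
C = 0.04 * (np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1))

# Optimal-filtering weights: minimize noise variance subject to w.g = 1,
# giving w = C^-1 g / (g' C^-1 g).
Cinv = np.linalg.inv(C)
w = Cinv @ g / (g @ Cinv @ g)

true_amplitude = 37.5
samples = true_amplitude * g      # noiseless digitized pulse
estimate = w @ samples
```

    The unbiasedness constraint w·g = 1 makes the estimate exact for a noiseless pulse; the C⁻¹ weighting minimizes the noise variance of the amplitude estimate. (The full technique also estimates the pulse time with a second set of weights.)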

  7. Algorithm Development for Multi-Energy SXR based Electron Temperature Profile Reconstruction

    Science.gov (United States)

    Clayton, D. J.; Tritz, K.; Finkenthal, M.; Kumar, D.; Stutman, D.

    2012-10-01

    New techniques utilizing computational tools such as neural networks and genetic algorithms are being developed to infer plasma electron temperature profiles on fast time scales (> 10 kHz) from multi-energy soft-x-ray (ME-SXR) diagnostics. Traditionally, a two-foil SXR technique, using the ratio of filtered continuum emission measured by two SXR detectors, has been employed on fusion devices as an indirect method of measuring electron temperature. However, these measurements can be susceptible to large errors due to uncertainties in time-evolving impurity density profiles, leading to unreliable temperature measurements. To correct this problem, measurements using ME-SXR diagnostics, which use three or more filtered SXR arrays to distinguish line and continuum emission from various impurities, in conjunction with constraints from spectroscopic diagnostics, can be used to account for unknown or time evolving impurity profiles [K. Tritz et al, Bull. Am. Phys. Soc. Vol. 56, No. 12 (2011), PP9.00067]. On NSTX, ME-SXR diagnostics can be used for fast (10-100 kHz) temperature profile measurements, using a Thomson scattering diagnostic (60 Hz) for periodic normalization. The use of more advanced algorithms, such as neural network processing, can decouple the reconstruction of the temperature profile from spectral modeling.
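
    The traditional two-foil ratio method referred to above can be sketched with a toy filtered-continuum model; the cutoff energies and the emission model are invented for the demo, and real signals also contain the impurity line emission that makes the two-foil method unreliable:

```python
import numpy as np

def filtered_brightness(Te, Ec):
    """Toy continuum model: integral of exp(-E/Te) above a filter cutoff Ec,
    i.e. Te * exp(-Ec/Te). Impurity effects are deliberately ignored."""
    return Te * np.exp(-Ec / Te)

Ec_thin, Ec_thick = 1.0, 2.0          # keV, hypothetical filter cutoffs

def ratio(Te):
    return filtered_brightness(Te, Ec_thick) / filtered_brightness(Te, Ec_thin)

# Tabulate the (monotonic) ratio once, then invert measurements by interpolation.
Te_grid = np.linspace(0.2, 5.0, 500)  # keV
R_grid = ratio(Te_grid)

def infer_Te(R_measured):
    return np.interp(R_measured, R_grid, Te_grid)

Te_true = 1.7
Te_hat = infer_Te(ratio(Te_true))
```

    Because the ratio here reduces to exp(-(Ec_thick - Ec_thin)/Te), it is monotonic in Te and the table inversion is well posed; unknown impurity profiles break exactly this assumption, which motivates the ME-SXR approach.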

  8. Modelling and automatic reactive power control of isolated wind-diesel hybrid power systems using ANN

    International Nuclear Information System (INIS)

    Bansal, R.C.

    2008-01-01

    This paper presents an artificial neural network (ANN) based approach to tune the parameters of the static var compensator (SVC) reactive power controller over a wide range of typical load model parameters. The gains of PI (proportional integral) based SVC are optimised for typical values of the load voltage characteristics (n_q) by conventional techniques. Using the generated data, the method of multi-layer feed forward ANN with error back propagation training is employed to tune the parameters of the SVC. An ANN tuned SVC controller has been applied to control the reactive power of a variable slip/speed isolated wind-diesel hybrid power system. It is observed that the maximum deviations of all parameters are more for larger values of n_q. It has been shown that initially synchronous generator supplies the reactive power required by the induction generator and/or load, and the latter reactive power is purely supplied by the SVC.

  9. Modelling and automatic reactive power control of isolated wind-diesel hybrid power systems using ANN

    Energy Technology Data Exchange (ETDEWEB)

    Bansal, R.C. [Electrical and Electronics Engineering Division, School of Engineering and Physics, The University of the South Pacific, Suva (Fiji)

    2008-02-15

    This paper presents an artificial neural network (ANN) based approach to tune the parameters of the static var compensator (SVC) reactive power controller over a wide range of typical load model parameters. The gains of PI (proportional integral) based SVC are optimised for typical values of the load voltage characteristics (n_q) by conventional techniques. Using the generated data, the method of multi-layer feed forward ANN with error back propagation training is employed to tune the parameters of the SVC. An ANN tuned SVC controller has been applied to control the reactive power of a variable slip/speed isolated wind-diesel hybrid power system. It is observed that the maximum deviations of all parameters are more for larger values of n_q. It has been shown that initially synchronous generator supplies the reactive power required by the induction generator and/or load, and the latter reactive power is purely supplied by the SVC. (author)

  10. Reconstruction of the energy flux and search for squarks and gluinos in D0 experiment

    International Nuclear Information System (INIS)

    Ridel, M.

    2002-04-01

    The DØ experiment is located at the Fermi National Accelerator Laboratory on the Tevatron proton-antiproton collider. Run II started in March 2001 after 5 years of shutdown and will allow DØ to extend its reach in searches for squarks and gluinos, particles predicted by supersymmetry. In this work, I focused on their decays that lead to a signature with jets and missing transverse energy. Before data taking started, I studied both software and hardware ways to improve the energy measurement, which is crucial for jets and for missing transverse energy. Energy deposits in the calorimeter have been clustered with cellNN at the cell level instead of the tower level. Efforts have been made to take advantage of the calorimeter granularity to aim at reconstructing individual particle showers. CellNN starts from the third floor, which has quadruple granularity compared to the other floors. The longitudinal information has been used to detect overlaps of electromagnetic and hadronic showers. Then, clusters and reconstructed tracks from the central detectors are combined and their energies compared; the better measurement is kept. This procedure improves the reconstruction of the energy flow of each event. The efficiency of the current calorimeter triggers has been determined. They have been used to perform a Monte Carlo search analysis of squarks and gluinos in the mSUGRA framework. The lower bound that DØ will be able to put on squark and gluino masses with a 100 pb^-1 integrated luminosity has been predicted. The use of the energy flow instead of standard reconstruction tools will be able to improve this lower limit. (author)

  11. Accelerated gradient methods for total-variation-based CT image reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology

    2011-07-01

    Total-variation (TV)-based CT image reconstruction has been shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular, TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from being close to real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature, such as Barzilai-Borwein (BB) step-size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)

  12. Development of ANN-based models to predict the static response and dynamic response of a heat exchanger in a real MVAC system

    International Nuclear Information System (INIS)

    Hu Qinhua; So, Albert T P; Tse, W L; Ren, Qingchang

    2005-01-01

    This paper presents a systematic approach to developing artificial neural network (ANN) models to predict the performance of a heat exchanger operating in a real mechanical ventilation and air-conditioning (MVAC) system. Two approaches were attempted and are presented. Every detailed component of the MVAC system has been considered, and we attempt to model each of them with one ANN. This study used the neural network technique to obtain a static and a dynamic model for a heat exchanger mounted in an air handling unit (AHU), which is the key component of the MVAC system. It has been verified that almost all of the predicted values of the ANN model were within 95%-105% of the measured values, with a consistent mean relative error (MRE) smaller than 2.5%. The paper details our experiences in using ANNs, especially those with back-propagation (BP) structures. The weights and biases of our trained ANN models are also listed, serving as a good reference for readers dealing with their own situations.

  13. Using ANN and EPR models to predict carbon monoxide concentrations in urban area of Tabriz

    Directory of Open Access Journals (Sweden)

    Mohammad Shakerkhatibi

    2015-09-01

    Full Text Available Background: Forecasting of air pollutants has become a popular topic of environmental research today. For this purpose, the artificial neural network (ANN) technique is widely used as a reliable method for forecasting air pollutants in urban areas. On the other hand, the evolutionary polynomial regression (EPR) model has recently been used as a forecasting tool in some environmental issues. In this research, we compared the ability of these models to forecast carbon monoxide (CO) concentrations in the urban area of Tabriz city. Methods: The dataset of CO concentrations measured at the fixed stations operated by the East Azerbaijan Environmental Office, along with meteorological data obtained from the East Azerbaijan Meteorological Bureau from March 2007 to March 2013, was used as input for the ANN and EPR models. Results: Based on the results, the performance of the ANN is more reliable in comparison with the EPR. Using the ANN model, the correlation coefficient values at all monitoring stations were above 0.85. Conversely, the R2 values for these stations were below 0.41 using the EPR model. Conclusion: The EPR model could not overcome the nonlinearities of the input data, whereas the ANN model displayed more accurate results. Hence, ANN models are robust tools for predicting air pollutant concentrations.

  14. Rapidly 3D Texture Reconstruction Based on Oblique Photography

    Directory of Open Access Journals (Sweden)

    ZHANG Chunsen

    2015-07-01

    Full Text Available This paper proposes a fast city-texture reconstruction method based on oblique aerial images for the reconstruction of three-dimensional city models. Based on photogrammetry and computer vision theory, and using a digital surface model of the city buildings obtained by prior processing, the geometric projection between object space and image space is calculated through the collinearity equations to obtain the three-dimensional structure and texture information. An optimization algorithm then selects the best texture for each object surface, realizing automatic extraction of building facade textures and occlusion handling in densely built-up areas. Reconstruction results with real image textures show that the method offers a high degree of automation, vivid visual effect and low cost, and provides an effective means for rapid, wide-area texture reconstruction of 3D city models.

  15. Designing on ICT reconstruction software based on DSP techniques

    International Nuclear Information System (INIS)

    Liu Jinhui; Xiang Xincheng

    2006-01-01

    The convolution back-projection (CBP) algorithm is generally used to reconstruct CT images in industrial computed tomography (ICT), and the reconstruction is usually performed on a PC or workstation. In order to make CT reconstruction software run on multiple platforms, a CT reconstruction method based on modern digital signal processor (DSP) techniques is proposed and realized in this paper. A hardware system based on TI's C6701 DSP processor was selected to support the software. The CT reconstruction software was written entirely in assembly language specific to the DSP hardware. The software runs on TI's C6701 EVM board; fed with CT data, it produces CT images that meet practical requirements. (authors)
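
    The CBP principle itself is independent of the target hardware. The sketch below is a plain NumPy illustration (ramp filtering plus back-projection for a centered disk whose parallel projections are known analytically), not the assembly implementation described in the paper; grid sizes are arbitrary:

```python
import numpy as np

def ramp_filter(p, ds):
    """Frequency-domain ramp (Ram-Lak) filtering of one projection."""
    f = np.fft.fftfreq(p.size, d=ds)
    return np.real(np.fft.ifft(np.fft.fft(p) * np.abs(f)))

def cbp_disk(n=64, radius=0.5, n_angles=90):
    """Reconstruct a centered uniform disk of density 1 from its analytic
    parallel projections p(s) = 2*sqrt(r^2 - s^2), identical at every angle."""
    s = np.linspace(-1.0, 1.0, n)
    ds = s[1] - s[0]
    proj = 2.0 * np.sqrt(np.clip(radius**2 - s**2, 0.0, None))
    q = ramp_filter(proj, ds)
    xs, ys = np.meshgrid(s, s)
    img = np.zeros((n, n))
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        t = xs * np.cos(theta) + ys * np.sin(theta)
        img += np.interp(t, s, q)      # back-project the filtered profile
    return img * np.pi / n_angles      # discretize the integral over angle

img = cbp_disk()
center = img[32, 32]    # a point well inside the disk (true density 1)
outside = img[4, 4]     # a point well outside the disk (true density 0)
```

    The convolution (here implemented as FFT-domain ramp filtering) and the back-projection inner loop are exactly the two compute-heavy kernels that a DSP implementation would hand-optimize.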

  16. Pollen-based continental climate reconstructions at 6 and 21 ka: a global synthesis

    Energy Technology Data Exchange (ETDEWEB)

    Bartlein, P.J. [University of Oregon, Department of Geography, Eugene, Oregon (United States); Harrison, S.P. [University of Bristol, School of Geographical Sciences, Bristol (United Kingdom); Macquarie University, School of Biological Sciences, North Ryde, NSW (Australia); Brewer, S. [University of Wyoming, Botany Department, Wyoming (United States); Connor, S. [University of the Algarve, Centre for Marine and Environmental Research, Faro (Portugal); Davis, B.A.S. [Ecole Polytechnique Federale de Lausanne, School of Architecture, Civil and Environmental Engineering, Lausanne (Switzerland); Gajewski, K.; Viau, A.E. [University of Ottawa, Department of Geography, Ottawa, ON (Canada); Guiot, J. [CEREGE, Aix-en-Provence cedex 4 (France); Harrison-Prentice, T.I. [GTZ, PAKLIM, Jakarta (Indonesia); Henderson, A. [University of Minnesota, Department of Geology and Geophysics, Minneapolis, MN (United States); Peyron, O. [Laboratoire Chrono-Environnement UMR 6249 CNRS-UFC UFR Sciences et Techniques, Besancon Cedex (France); Prentice, I.C. [Macquarie University, School of Biological Sciences, North Ryde, NSW (Australia); University of Bristol, QUEST, Department of Earth Sciences, Bristol (United Kingdom); Scholze, M. [University of Bristol, QUEST, Department of Earth Sciences, Bristol (United Kingdom); Seppae, H. [University of Helsinki, Department of Geology, P.O. Box 65, Helsinki (Finland); Shuman, B. [University of Wyoming, Department of Geology and Geophysics, Laramie, WY (United States); Sugita, S. [Tallinn University, Institute of Ecology, Tallinn (Estonia); Thompson, R.S. [US Geological Survey, PO Box 25046, Denver, CO (United States); Williams, J. [University of Wisconsin, Department of Geography, Madison, WI (United States); Wu, H. [Chinese Academy of Sciences, Key Laboratory of Cenozoic Geology and Environment, Institute of Geology and Geophysics, Beijing (China)

    2011-08-15

    Subfossil pollen and plant macrofossil data derived from 14C-dated sediment profiles can provide quantitative information on glacial and interglacial climates. The data allow climate variables related to growing-season warmth, winter cold, and plant-available moisture to be reconstructed. Continental-scale reconstructions have been made for the mid-Holocene (MH, around 6 ka) and Last Glacial Maximum (LGM, around 21 ka), allowing comparison with palaeoclimate simulations currently being carried out as part of the fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change. The synthesis of the available MH and LGM climate reconstructions and their uncertainties, obtained using modern-analogue, regression and model-inversion techniques, is presented for four temperature variables and two moisture variables. Reconstructions of the same variables based on surface-pollen assemblages are shown to be accurate and unbiased. Reconstructed LGM and MH climate anomaly patterns are coherent, consistent between variables, and robust with respect to the choice of technique. They support a conceptual model of the controls of Late Quaternary climate change whereby the first-order effects of orbital variations and greenhouse forcing on the seasonal cycle of temperature are predictably modified by responses of the atmospheric circulation and surface energy balance. (orig.)

  17. Perspectives of increasing energy efficiency on designing new and reconstruction of present city districts: World experiences and local recommendations

    Directory of Open Access Journals (Sweden)

    Pucar Mila

    2006-01-01

    Full Text Available The 20th century brought a significant increase in energy consumption and a serious ecological crisis caused by the extensive use of fossil fuels (oil, coal). Because of that, many countries have enacted regulations to lower conventional energy consumption and to stimulate the use of renewable energy sources. This problem is particularly evident in the residential building sector, since over 50% of all energy produced is consumed there. This paper gives methodological recommendations regarding the principles of energy-efficient housing and general comfort improvement, as well as the evident advantages of passive solar panels compared to traditional energy sources (fossil fuels). These possibilities are considered in two different scenarios: reconstruction of already built city blocks, and energy-efficient design of brand-new structures. The paper considers two case studies: one a reconstruction, a city block in France built in the mid-1960s, and the other an energy-efficient settlement in Greece, the "Solar Village", built in the 1980s and designed on bioclimatic principles from the very beginning. Methodological recommendations for energy-efficient planning and design based on these two examples are applied to New Belgrade block 7/3, which was built in the 1950s.

  18. Optimising training data for ANNs with Genetic Algorithms

    OpenAIRE

    Kamp , R. G.; Savenije , H. H. G.

    2006-01-01

    International audience; Artificial Neural Networks (ANNs) have proved to be good modelling tools in hydrology for rainfall-runoff modelling and hydraulic flow modelling. Representative datasets are necessary for the training phase in which the ANN learns the model's input-output relations. Good and representative training data is not always available. In this publication Genetic Algorithms (GA) are used to optimise training datasets. The approach is tested with an existing hydraulic model in ...

  19. Optimising training data for ANNs with Genetic Algorithms

    OpenAIRE

    R. G. Kamp; R. G. Kamp; H. H. G. Savenije

    2006-01-01

    Artificial Neural Networks (ANNs) have proved to be good modelling tools in hydrology for rainfall-runoff modelling and hydraulic flow modelling. Representative datasets are necessary for the training phase in which the ANN learns the model's input-output relations. Good and representative training data is not always available. In this publication Genetic Algorithms (GA) are used to optimise training datasets. The approach is tested with an existing hydraulic model in The Netherlands. An...
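
    As a toy illustration of using a GA to optimise a training dataset, the sketch below evolves a binary selection mask over a candidate pool; the fitness function (spread of the selected points, with a size penalty) is an invented surrogate for "representative", not the hydraulic-model criterion used in the publication:

```python
import numpy as np

rng = np.random.default_rng(7)

# Candidate pool of input points; we want a representative training subset.
pool = rng.uniform(0.0, 10.0, size=100)

def fitness(mask):
    """Score a subset: reward well-separated samples, penalize subsets far
    from a target size of 15 (a toy surrogate for 'representative')."""
    pts = np.sort(pool[mask])
    if pts.size < 2:
        return -1e9
    return np.min(np.diff(pts)) - 0.05 * abs(pts.size - 15)

def mutate(mask, rate=0.02):
    flip = rng.random(mask.size) < rate
    return mask ^ flip                 # flip a few selection bits

def crossover(a, b):
    cut = rng.integers(1, a.size)      # single-point crossover
    return np.concatenate([a[:cut], b[cut:]])

pop = [rng.random(pool.size) < 0.15 for _ in range(40)]
best0 = max(fitness(m) for m in pop)

for _ in range(60):
    scored = sorted(pop, key=fitness, reverse=True)
    parents = scored[:10]              # elitist truncation selection
    pop = parents + [
        mutate(crossover(parents[rng.integers(10)], parents[rng.integers(10)]))
        for _ in range(30)
    ]

best1 = max(fitness(m) for m in pop)
```

    Because the best individuals are carried over unchanged each generation, the best fitness never decreases; in the publication's setting the fitness would instead be the ANN's performance when trained on the selected subset.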

  20. Hybrid spectral CT reconstruction.

    Directory of Open Access Journals (Sweden)

    Darin P Clark

    Full Text Available Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with

  1. Hybrid spectral CT reconstruction

    Science.gov (United States)

    Clark, Darin P.

    2017-01-01

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral

  2. Low-dose CT image reconstruction using gain intervention-based dictionary learning

    Science.gov (United States)

    Pathak, Yadunath; Arya, K. V.; Tiwari, Shailendra

    2018-05-01

    Computed tomography (CT) is extensively utilized in clinical diagnosis. However, X-ray exposure can cause somatic damage to the human body, such as cancer. Owing to this radiation risk, research has focused on the radiation dose delivered to patients through CT investigations, and low-dose CT has therefore become a significant research area. Many different low-dose CT reconstruction techniques have been proposed, but they suffer from various issues such as over-smoothing, artifacts, and noise. Therefore, in this paper, we have proposed a novel integrated low-dose CT reconstruction technique. The proposed technique utilizes global dictionary-based statistical iterative reconstruction (GDSIR) and adaptive dictionary-based statistical iterative reconstruction (ADSIR). If the dictionary (D) is predetermined, GDSIR can be used; if D is defined adaptively, ADSIR is the appropriate choice. A gain intervention-based filter is also used as a post-processing step to remove artifacts from the reconstructed low-dose CT images. Experiments considering the proposed and other low-dose CT reconstruction techniques on well-known benchmark CT images have shown that the proposed technique outperforms the available approaches.
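
    As a concrete illustration of the dictionary-based sparse coding that GDSIR relies on (the predetermined-dictionary case), the sketch below runs a few steps of orthogonal matching pursuit against a toy dictionary; the identity dictionary and the signal are illustrative choices, not the paper's setup.

```python
import numpy as np

# Sparse coding against a predetermined dictionary (the GDSIR case):
# a few greedy steps of orthogonal matching pursuit (OMP) pick the
# dictionary atoms that best explain a patch.
def omp(D, y, k):
    idx, r = [], y.copy()
    for _ in range(k):
        # pick the atom most correlated with the current residual
        idx.append(int(np.argmax(np.abs(D.T @ r))))
        sub = D[:, idx]
        # re-fit all selected atoms jointly, then update the residual
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        r = y - sub @ coef
    return idx, coef

D = np.eye(5)                      # trivial dictionary for illustration
y = np.array([0.0, 3.0, 0.0, -2.0, 0.0])
idx, coef = omp(D, y, 2)
print(sorted(idx))                 # atoms 1 and 3 are selected
```

With a real learned dictionary, `D` would be overcomplete and `k` would bound the sparsity of each patch code.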

  3. Numerical weather prediction (NWP) and hybrid ARMA/ANN model to predict global radiation

    International Nuclear Information System (INIS)

    Voyant, Cyril; Muselli, Marc; Paoli, Christophe; Nivet, Marie-Laure

    2012-01-01

    We propose in this paper an original technique to predict global radiation using a hybrid ARMA/ANN model and data issued from a numerical weather prediction (NWP) model. We particularly look at the multi-layer perceptron (MLP). After optimizing our architecture with NWP and endogenous data, previously made stationary, and using an innovative pre-input layer selection method, we combined it with an ARMA model according to a rule based on the analysis of hourly data series. This model has been used to forecast hourly global radiation for five locations in the Mediterranean area. Our technique outperforms classical models at all locations. The nRMSE of our hybrid MLP/ARMA model is 14.9%, compared with 26.2% for the naïve persistence predictor; in the standalone ANN case the nRMSE is 18.4%. Finally, in order to discuss the reliability of the forecaster outputs, a complementary study concerning the confidence interval of each prediction is proposed. -- Highlights: ► Time series forecasting with a hybrid method based on the ALADIN numerical weather model, ANN and ARMA. ► Innovative pre-input layer selection method. ► Combination of an optimized MLP and an ARMA model obtained from a rule based on the analysis of hourly data series. ► Stationarity processing (method and control) for the global radiation time series.
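
    The nRMSE figures quoted above can be reproduced for any forecaster in a few lines. The sketch below computes nRMSE for the naive persistence predictor on made-up radiation data; normalizing by the mean observed value is one common convention and an assumption here, since the abstract does not state which normalization is used.

```python
import math

def nrmse(pred, obs):
    """Normalized RMSE in percent (normalized by the mean observation),
    the figure of merit used to compare forecasters in the abstract."""
    n = len(obs)
    rmse = math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / n)
    return 100.0 * rmse / (sum(obs) / n)

# naive persistence predictor: next hour's radiation = current hour's
obs = [3.0, 4.0, 5.0, 4.0, 3.0]          # illustrative values
persistence = [obs[0]] + obs[:-1]
print(round(nrmse(persistence, obs), 1))  # -> 23.5
```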

  4. Blind compressed sensing image reconstruction based on alternating direction method

    Science.gov (United States)

    Liu, Qinan; Guo, Shuxu

    2018-04-01

    In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on existing blind compressed sensing theory, the optimal solution is found by alternating minimization. The proposed method addresses the difficulty of representing the sparse basis in compressed sensing, suppresses noise and improves the quality of the reconstructed image. It ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high-quality image signals under under-sampling conditions.
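
    The alternating scheme can be illustrated on a toy problem: measurements y = A(Ds) with an unknown dictionary D and a sparse code s, alternating a soft-thresholded gradient step in s with a small gradient step in D. This is a schematic sketch under made-up dimensions and step sizes, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 15, 30                      # fewer measurements than signal length
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[3, 11, 25]] = [1.0, -2.0, 1.5]
y = A @ x_true

D = np.eye(n)                      # initial dictionary guess
s = np.zeros(n)
lam, step, eta = 0.05, 0.1, 0.005
res = []
for _ in range(300):
    M = A @ D
    # s-step: one ISTA update of 0.5||y - A D s||^2 + lam |s|_1
    s = s - step * (M.T @ (M @ s - y))
    s = np.sign(s) * np.maximum(np.abs(s) - step * lam, 0.0)
    # D-step: small gradient step on the same residual w.r.t. D
    r = A @ D @ s - y
    D = D - eta * np.outer(A.T @ r, s)
    res.append(np.linalg.norm(A @ D @ s - y))
print(res[-1] < res[0])            # -> True: the data residual decreased
```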

  5. Assessment of spatial distribution of soil heavy metals using ANN-GA, MSLR and satellite imagery.

    Science.gov (United States)

    Naderi, Arman; Delavar, Mohammad Amir; Kaboudin, Babak; Askari, Mohammad Sadegh

    2017-05-01

    This study aims to assess and compare heavy metal distribution models developed using stepwise multiple linear regression (MSLR) and a neural network-genetic algorithm model (ANN-GA) based on satellite imagery. The sources of the heavy metals were also explored using the local Moran index. Soil samples (n = 300) were collected on a grid, and the pH, organic matter, clay and iron oxide contents, as well as the cadmium (Cd), lead (Pb) and zinc (Zn) concentrations, were determined for each sample. Visible/near-infrared reflectance (VNIR) within the electromagnetic ranges of the satellite imagery was applied to estimate heavy metal concentrations in the soil using the MSLR and ANN-GA models. The models were evaluated; the ANN-GA model demonstrated higher accuracy, and the autocorrelation results showed significant clusters of heavy metals around the industrial zone. Higher concentrations of Cd, Pb and Zn were noted under industrial land and irrigated farming than under barren land and dryland farming. Accumulation of industrial wastes along roads and streams was identified as the main source of pollution, and the concentration of soil heavy metals decreased with increasing distance from these sources. In comparison to MSLR, ANN-GA provided a more accurate indirect assessment of heavy metal concentrations in highly polluted soils. The clustering analysis provided reliable information about the spatial distribution of soil heavy metals and their sources.

  6. Quantum process reconstruction based on mutually unbiased basis

    International Nuclear Information System (INIS)

    Fernandez-Perez, A.; Saavedra, C.; Klimov, A. B.

    2011-01-01

    We study quantum process reconstruction based on the use of mutually unbiased projectors (MUB projectors) as input states for a D-dimensional quantum system, with D a power of a prime number. This approach connects the results of quantum-state tomography using mutually unbiased bases with the coefficients of a quantum process expanded in terms of MUB projectors. We also study the performance of the reconstruction scheme against random errors when measuring probabilities at the MUB projectors.
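
    For prime D, mutually unbiased bases can be written down explicitly. The sketch below uses the standard construction for an odd prime p (vector components ω^(k·n² + m·n)/√p) and checks numerically that vectors drawn from two different bases overlap with squared magnitude 1/D, the defining property of unbiasedness.

```python
import cmath, math

p = 3                                 # an odd prime dimension
w = cmath.exp(2j * math.pi / p)       # primitive p-th root of unity

def mub_vector(k, m):
    """m-th vector of the k-th non-computational MUB (odd prime p)."""
    return [w ** (k * n * n + m * n) / math.sqrt(p) for n in range(p)]

def overlap(u, v):
    return abs(sum(a.conjugate() * b for a, b in zip(u, v)))

u = mub_vector(0, 1)   # k = 0 is the Fourier basis
v = mub_vector(1, 2)   # a vector from a different basis
print(round(overlap(u, v) ** 2, 6))   # -> 0.333333, i.e. 1/p
```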

  7. Tau reconstruction, energy calibration and identification at ATLAS

    International Nuclear Information System (INIS)

    Trottier-Mcdonald, Michel

    2012-01-01

    Tau leptons play a central role in the LHC physics programme, in particular as an important signature in many Higgs boson and supersymmetry searches. They are further used in Standard Model electroweak measurements, as well as in detector-related studies such as the determination of the missing transverse energy scale. Copious backgrounds from QCD processes call for both efficient identification of hadronically decaying tau leptons and large suppression of fake candidates. A solid understanding of the combined performance of the calorimeter and tracking detectors is also required. We present the current status of tau reconstruction, energy calibration and identification with the ATLAS detector at the LHC. Identification efficiencies are measured in W → τν events in data and compared with predictions from Monte Carlo simulations, whereas the misidentification probabilities of QCD jets and electrons are determined from various jet-enriched data samples and from Z → ee events, respectively. The tau energy scale calibration is described, and systematic uncertainties on both the energy scale and the identification efficiencies are discussed. (author)

  8. 3D reconstruction based on light field images

    Science.gov (United States)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method for reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum camera. The work first extracts the sub-aperture images from the light field images and applies the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. The structure-from-motion (SFM) algorithm is then used on the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a sparse 3D point cloud. The method shows that 3D reconstruction can be achieved with only two light field captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming and laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
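
    The geometric core of the SFM step, triangulating a 3D point from two registered views, can be sketched with the linear (DLT) method; the projection matrices and the 3D point below are illustrative, not taken from the paper.

```python
import numpy as np

# Two-view triangulation (DLT): each matched pixel contributes two
# linear constraints on the homogeneous 3-D point; the solution is the
# null vector of the stacked system.
def triangulate(P1, P2, x1, x2):
    A = np.vstack([x1[0] * P1[2] - P1[0], x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0], x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])  # shifted camera
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(np.round(triangulate(P1, P2, x1, x2), 3))
```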

  9. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion and the success of Fourier-based techniques. An exception is the work done at the Lawrence Berkeley Laboratory on imaging with accelerated radioactive ions. An extension of that work to more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and with flexibility in the design of the instrument. Maximum likelihood estimator (MLE) methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to produce good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
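
    A minimal version of such a matrix-based MLE reconstruction is the classic MLEM multiplicative update, which needs only the system matrix and never inverts it. The 3×3 system matrix and activity values below are made up for illustration.

```python
import numpy as np

# MLEM: x <- x * A^T(y / Ax) / sensitivity. No matrix inversion is
# needed; the system matrix A encodes the instrument response.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])       # toy detector response (rows: bins)
x_true = np.array([2.0, 0.5, 1.0])
y = A @ x_true                        # noiseless measured counts

x = np.ones(3)                        # flat initial image
sens = A.sum(axis=0)                  # per-voxel sensitivity (column sums)
for _ in range(2000):
    x = x * (A.T @ (y / (A @ x))) / sens
print(np.round(x, 2))
```

With noiseless, consistent data and a well-conditioned A, the iteration converges to the true activity.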

  10. SU-F-T-261: Reconstruction of Initial Photon Fluence Based On EPID Images

    Energy Technology Data Exchange (ETDEWEB)

    Seliger, T; Engenhart-Cabillic, R [Philipp University of Marburg, Marburg (Germany); Czarnecki, D; Maeder, U; Zink, K [Technische Hochschule Mittelhessen - University of Applied Sciences, Giessen (Germany); Kussaether, R [MedCom GmbH, Darmstadt (Germany); Poppe, B [University Hospital for Medical Radiation Physics, Pius-Hospital, Medical Campus, Carl von Ossietzky University of Oldenburg (Germany)

    2016-06-15

    Purpose: To verify an algorithm for reconstructing the relative initial photon fluence for clinical use. Clinical EPID and CT images were acquired to reconstruct an external photon radiation treatment field. The reconstructed initial photon fluence could be used to verify the treatment or to calculate the dose applied to the patient. Methods: The acquired EPID images were corrected for scatter caused by the patient and the EPID with an iterative reconstruction algorithm, and the transmitted photon fluence behind the patient was calculated subsequently. Based on the transmitted fluence, the initial photon fluence was calculated using a back-projection algorithm which takes the patient geometry and its energy-dependent linear attenuation into account. This attenuation was derived from the acquired cone-beam CT or the planning CT by calculating a water-equivalent radiological thickness for each irradiation direction. To verify the algorithm, an inhomogeneous phantom containing three inhomogeneities was irradiated by a static 6 MV photon field and compared to a reference flood field image. Results: The mean deviation between the reconstructed relative photon fluence for the inhomogeneous phantom and the flood field EPID image was 3%, rising to 7% for off-axis fluence. This was probably caused by the clinical EPID calibration used, which flattens the inhomogeneous fluence profile of the beam. Conclusion: In this clinical experiment the algorithm achieved good results in the center of the field but showed large deviations in the lateral fluence. This could be reduced by optimizing the EPID calibration to consider the off-axis differential energy response. In further work, this and other aspects of the EPID, e.g. field size dependency, CT and dose calibration, have to be studied to realize a clinically acceptable accuracy of 2%.
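
    The back-projection idea, undoing attenuation along each ray using a water-equivalent thickness, reduces in its simplest form to multiplying the transmitted fluence by exp(μ·t). The effective attenuation coefficient below is an illustrative value for a megavoltage beam in water, not a number from the abstract, and a single effective μ is itself a simplification of the energy-dependent attenuation the algorithm uses.

```python
import math

# Simplified ray-by-ray fluence back-projection: recover the relative
# initial fluence from the transmitted fluence measured behind the
# patient, assuming a water-equivalent path and one effective mu.
MU_EFF = 0.0049   # mm^-1, illustrative effective value for a ~6 MV beam

def initial_fluence(transmitted, water_eq_mm, mu=MU_EFF):
    return transmitted * math.exp(mu * water_eq_mm)

phi_t = 0.62                                # relative transmitted fluence
print(round(initial_fluence(phi_t, 100.0), 3))   # -> 1.012
```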

  11. Predicting fuelwood prices in Greece with the use of ARIMA models, artificial neural networks and a hybrid ARIMA-ANN model

    International Nuclear Information System (INIS)

    Koutroumanidis, Theodoros; Ioannou, Konstantinos; Arabatzis, Garyfallos

    2009-01-01

    Throughout history, energy resources have acquired strategic significance for the economic growth and social welfare of any country. The large-scale oil crisis of 1973, coupled with various environmental protection issues, led many countries to look for new, alternative energy sources. Biomass, and fuelwood in particular, constitutes a major renewable energy source (RES) that can make a significant contribution as a substitute for oil. This paper initially describes the contribution of renewable energy sources to the production of electricity and examines the role of forests in the production of fuelwood in Greece. Autoregressive integrated moving average (ARIMA) models, artificial neural networks (ANNs) and a hybrid model are then used to predict the future selling prices of the fuelwood (from broadleaved and coniferous species) produced by Greek state forest farms. The ARIMA-ANN hybrid model provided the best prediction results, enabling decision-makers to plan fuelwood production and marketing more rationally. (author)

  12. Rapid Identification of Asteraceae Plants with Improved RBF-ANN Classification Models Based on MOS Sensor E-Nose

    Directory of Open Access Journals (Sweden)

    Hui-Qin Zou

    2014-01-01

    Full Text Available Plants from the Asteraceae family are widely used as herbal medicines and food ingredients, especially in Asia. Authentication and quality control of these different Asteraceae plants are therefore important for ensuring consumer safety and efficacy. In recent decades, the electronic nose (E-nose) has been studied as an alternative approach. In this paper, we aim to develop a novel discriminative model by improving the radial basis function artificial neural network (RBF-ANN) classification model. Feature selection algorithms, including principal component analysis (PCA) and BestFirst + CfsSubsetEval (BC), were applied to improve the RBF-ANN models. The results illustrate that the improved RBF-ANN models retain the classification accuracy (100%) of the original higher-dimensional model while using lower-dimensional data. This is the first time feature selection methods have been introduced to identify the more relevant MOS sensors; in this case, sensors S1, S3, S4, S6 and S7 show better capability to distinguish these Asteraceae plants. This paper also gives insights for further research in this area, for instance sensor array optimization and performance improvement of the classification model.
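
    Of the two feature-selection steps named, PCA is straightforward to sketch: project the sensor responses onto the leading principal components obtained from an SVD of the centered data. The sensor-response matrix below is made up (one nearly redundant and one nearly constant channel), purely to show the dimensionality reduction.

```python
import numpy as np

# PCA sketch: center the sensor matrix, take its SVD, and keep the
# projection onto the top-k right singular vectors (principal components).
def pca_project(X, k):
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, S

X = np.array([[2.0, 0.1, 4.1], [4.0, 0.1, 8.2],
              [6.0, 0.0, 12.1], [8.0, 0.1, 16.0]])   # toy E-nose readings
Z, S = pca_project(X, 1)
# nearly all variance lies on the first component
print(float(S[0] ** 2 / (S ** 2).sum()) > 0.99)      # -> True
```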

  13. Voting based object boundary reconstruction

    Science.gov (United States)

    Tian, Qi; Zhang, Like; Ma, Jingsheng

    2005-07-01

    A voting-based object boundary reconstruction approach is proposed in this paper. Morphological techniques have been adopted in many video object extraction applications to reconstruct missing pixels, but when the missing areas become large, morphological processing does not yield good results. Recently, tensor voting has attracted attention and can be used for boundary estimation on curves or irregular trajectories; however, the complexity of saliency tensor creation limits its use in real-time systems. An alternative approach based on tensor voting is introduced in this paper. Rather than creating saliency tensors, we use a "2-pass" method for orientation estimation. In the first pass, a Sobel detector is applied to a coarse boundary image to obtain the gradient map. In the second pass, each pixel casts decreasing weights based on its gradient information, and the direction with the maximum weight sum is selected as the orientation of the pixel. After the orientation map is obtained, pixels link edges or intersections along their directions. The approach is applied to various video surveillance clips under different conditions, and the experimental results demonstrate significant improvement in the accuracy of the final extracted objects.
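
    The "2-pass" orientation estimation can be sketched as Sobel gradients followed by a magnitude-weighted vote for the dominant direction (perpendicular to the mean gradient). The tiny step-edge image and the global, rather than per-pixel, vote below are simplifications for illustration.

```python
import numpy as np

# Pass 1: Sobel gradients of a coarse boundary image.
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)

def sobel(img, k):
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = np.sum(img[i-1:i+2, j-1:j+2] * k)
    return out

img = np.zeros((7, 7)); img[:, 3:] = 1.0        # vertical step edge
gx, gy = sobel(img, KX), sobel(img, KX.T)
mag = np.hypot(gx, gy)

# Pass 2: pixels vote with gradient-magnitude weights; the edge
# orientation is perpendicular to the resulting mean gradient.
theta = np.degrees(np.arctan2((mag * gy).sum(), (mag * gx).sum()))
print(round(90.0 - theta, 1))                   # -> 90.0 (vertical edge)
```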

  14. Split-Bregman-based sparse-view CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Vandeghinste, Bert; Vandenberghe, Stefaan [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Goossens, Bart; Pizurica, Aleksandra; Philips, Wilfried [Ghent Univ. (Belgium). Image Processing and Interpretation Research Group (IPI); Beenhouwer, Jan de [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Antwerp Univ., Wilrijk (Belgium). The Vision Lab; Staelens, Steven [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Antwerp Univ., Edegem (Belgium). Molecular Imaging Centre Antwerp

    2011-07-01

    Total variation minimization has been extensively researched for image denoising and sparse-view reconstruction. These methods show superior denoising performance for simple images with little texture, but lose texture information when applied to more complex images. It could thus be beneficial to use other regularizers within medical imaging. We propose a general regularization method based on a split-Bregman approach. We show results for this framework combined with a total variation denoising operator, in comparison with ASD-POCS, and demonstrate that sparse-view reconstruction and noise regularization are possible. This general method will allow us to investigate other regularizers in the context of regularized CT reconstruction and to decrease acquisition times in µCT. (orig.)
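
    In one dimension, the split-Bregman/TV combination the abstract builds on takes only a few lines: alternate a linear solve in u, a shrinkage step in the auxiliary variable d = Du, and a Bregman update of b. The regularization parameters, signal, and noise level below are illustrative, and this denoising toy omits the tomographic forward operator of the full reconstruction problem.

```python
import numpy as np

def tv_denoise_split_bregman(f, lam=1.0, mu=5.0, iters=100):
    """1-D TV denoising, min_u 0.5||u-f||^2 + lam|Du|_1,
    via split Bregman with the split d = Du."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)            # forward difference (n-1, n)
    A = np.eye(n) + mu * D.T @ D
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(iters):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))   # u-subproblem
        t = D @ u + b
        d = np.sign(t) * np.maximum(np.abs(t) - lam / mu, 0.0)  # shrinkage
        b = t - d                                        # Bregman update
    return u

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0], 25)
noisy = clean + 0.1 * rng.standard_normal(50)
den = tv_denoise_split_bregman(noisy, lam=0.2)
print(np.abs(np.diff(den)).sum() < np.abs(np.diff(noisy)).sum())  # -> True
```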

  15. Error Evaluation in a Stereovision-Based 3D Reconstruction System

    Directory of Open Access Journals (Sweden)

    Kohler Sophie

    2010-01-01

    Full Text Available The work presented in this paper deals with the performance analysis of the whole 3D reconstruction process for imaged objects, specifically of the set of geometric primitives describing their outline, extracted from a pair of images with known camera models. The proposed analysis focuses on error estimation for the edge detection process, the starting step of the whole reconstruction procedure. The fitting parameters describing the geometric features of the workpiece to be evaluated are used as quality measures to determine error bounds and, finally, to estimate the edge detection errors. These error estimates are then propagated up to the final 3D reconstruction step. The suggested error analysis procedure for stereovision-based reconstruction tasks further allows the quality of the 3D reconstruction to be evaluated. The resulting final error estimates make it possible to state whether the reconstruction results fulfill a priori defined criteria, for example dimensional constraints including tolerance information, in vision-based quality control applications.
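
    To first order, propagating an edge-detection error bound to a fitted parameter can be sketched as |df/dx|·σ with a numerical derivative; the radius-from-diameter example below is hypothetical and much simpler than the multi-parameter propagation the paper performs.

```python
# First-order error propagation sketch: the uncertainty sigma of a
# measured quantity x maps to |df/dx| * sigma in a derived parameter
# f(x); df/dx is taken by central differences.
def propagate(f, x, sigma, h=1e-6):
    dfdx = (f(x + h) - f(x - h)) / (2 * h)
    return abs(dfdx) * sigma

# hypothetical example: radius estimated from a measured diameter
print(round(propagate(lambda d: d / 2.0, 10.0, 0.2), 6))   # -> 0.1
```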

  16. Integration of artificial intelligence methods and life cycle assessment to predict energy output and environmental impacts of paddy production.

    Science.gov (United States)

    Nabavi-Pelesaraei, Ashkan; Rafiee, Shahin; Mohtasebi, Seyed Saeid; Hosseinzadeh-Bandbafha, Homa; Chau, Kwok-Wing

    2018-08-01

    Prediction of agricultural energy output and environmental impacts plays an important role in energy management and environmental conservation, as it can help evaluate agricultural energy efficiency, commission crop production systems, and detect and diagnose faults in crop production systems. Agricultural energy output and environmental impacts can be readily predicted by artificial intelligence (AI), owing to its ease of use and adaptability for seeking optimal solutions rapidly, as well as its use of historical data to predict future agricultural energy use patterns under constraints. This paper predicts the energy output and environmental impacts of paddy production in Guilan province, Iran, based on two AI methods: artificial neural networks (ANNs) and the adaptive neuro-fuzzy inference system (ANFIS). The amounts of energy input and output in paddy production are 51,585.61 and 66,112.94 MJ kg-1, respectively. Life cycle assessment (LCA) is used to evaluate the environmental impacts of paddy production. Results show that, in paddy production, on-farm emission is a hotspot in the global warming, acidification and eutrophication impact categories. An ANN model with a 12-6-8-1 structure is selected as the best one for predicting energy output. The correlation coefficient (R) varies from 0.524 to 0.999 in training for energy input and environmental impacts in the ANN models. The ANFIS model is developed based on a hybrid learning algorithm, with an R of 0.860 for predicting output energy and from 0.944 to 0.997 for environmental impacts. Results indicate that the multi-level ANFIS is a useful tool for managers in large-scale planning for forecasting the energy output and environmental indices of agricultural production systems, owing to its higher computation speed compared with the ANN model, despite the ANN's higher accuracy. Copyright © 2018 Elsevier B.V. All rights reserved.
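
    The correlation coefficient R used above to score the ANN and ANFIS models is plain Pearson correlation between predicted and observed values, sketched here on made-up numbers.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

pred = [1.0, 2.1, 2.9, 4.2]   # illustrative model predictions
obs = [1.0, 2.0, 3.0, 4.0]    # illustrative observations
print(round(pearson_r(pred, obs), 3))   # -> 0.996
```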

  17. Ado Vabbe Prize to Anne Parmasto

    Index Scriptorium Estoniae

    2003-01-01

    The year-end exhibition of Tartu art at the Tartu Art House. Exhibition design by Mari Nõmmela. The A. Vabbe prize was awarded to Anne Parmasto, the E-Kunstisalong prize to Silja Salmistu, the EDA prize to Lii Jürgenson, the Wilde café prize to Jüri Marran, the AS Vunder and Tartu brewery A. Le Coq prize to Sami Makkonen, and the AS Merko Tartu prize to Eda Lõhmus.

  18. Right adrenal vein: comparison between adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Science.gov (United States)

    Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M

    2018-06-01

    To compare right adrenal vein (RAV) visualisation and the degree of contrast enhancement on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and the requirement for written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13, for the expert and 93% versus 75%, p=0.002, for the beginner). RAV visualisation confidence ratings with MBIR were significantly greater than with ASiR, background noise with MBIR was significantly lower than with ASiR, and CT attenuation was significantly higher with MBIR than with ASiR (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in RAV visualisation compared with ASiR. Copyright © 2018 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  19. Enhancement of RWSN Lifetime via Firework Clustering Algorithm Validated by ANN

    Directory of Open Access Journals (Sweden)

    Ahmad Ali

    2018-03-01

    Full Text Available Nowadays, wireless power transfer is ubiquitously used in wireless rechargeable sensor networks (WSNs), where energy limitation is a grave concern. Lifetime enhancement of sensor networks is a challenging task that needs to be resolved, and the wireless charging vehicle is an emerging technology for improving overall network efficiency. The present study focuses on enhancing the overall network lifetime of a rechargeable wireless sensor network. To resolve the issues mentioned above, we propose a swarm intelligence based hard clustering approach using the fireworks algorithm with an adaptive transfer function (FWA-ATF). In this work, a virtual clustering method that utilizes the fireworks optimization algorithm is applied in the routing process; to date, the FWA-ATF algorithm has not been applied to rechargeable WSNs. Furthermore, a validation study of the proposed method using the artificial neural network (ANN) backpropagation algorithm is incorporated in the present study. Different algorithms are applied to evaluate the performance of the proposed technique, which gives the best results in this mechanism. Numerical results indicate that our method outperforms existing methods, yielding improvements of up to 80% in energy consumption and in the vacation time of the wireless charging vehicle.

  20. Time-based Reconstruction of Free-streaming Data in CBM

    Science.gov (United States)

    Akishina, Valentina; Kisel, Ivan; Vassiliev, Iouri; Zyzak, Maksym

    2018-02-01

    Traditional latency-limited trigger architectures typical of conventional experiments are inapplicable to the CBM experiment. Instead, CBM will ship and collect time-stamped data into a readout buffer in the form of time-slices of a certain length and deliver them to a large computer farm, where online event reconstruction and selection will be performed. Grouping measurements into physical collisions must be performed in software and requires reconstruction not only in space but also in time, the so-called 4-dimensional track reconstruction and event building. The tracks, reconstructed with the 4D Cellular Automaton track finder, are combined into event-corresponding clusters according to their estimated times at the target position and the errors obtained with the Kalman filter method. The reconstructed events are given as input to the KF Particle Finder package for short-lived particle reconstruction. The results of time-based reconstruction of simulated collisions in CBM are presented and discussed in detail.
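
    The event-building step, grouping tracks whose estimated times at the target agree within their errors, can be sketched as a one-dimensional clustering pass over time-sorted tracks. The times, uncertainties, and the 3σ compatibility cut below are illustrative, not CBM's actual values.

```python
# Time-based event building sketch: each track carries an estimated
# time at the target and an uncertainty; consecutive tracks are merged
# into one event cluster when they are compatible within n_sigma.
def build_events(tracks, n_sigma=3.0):
    """tracks: list of (t, sigma_t) sorted by t -> list of clusters."""
    events, current = [], [tracks[0]]
    for t, s in tracks[1:]:
        t_prev, s_prev = current[-1]
        if abs(t - t_prev) <= n_sigma * (s + s_prev):
            current.append((t, s))
        else:
            events.append(current)
            current = [(t, s)]
    events.append(current)
    return events

tracks = [(10.0, 0.5), (10.8, 0.5), (25.0, 0.5), (25.2, 0.5), (40.0, 0.5)]
print([len(e) for e in build_events(tracks)])   # -> [2, 2, 1]
```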

  1. Anthropometric body measurements based on multi-view stereo image reconstruction.

    Science.gov (United States)

    Li, Zhaoxin; Jia, Wenyan; Mao, Zhi-Hong; Li, Jie; Chen, Hsin-Chen; Zuo, Wangmeng; Wang, Kuanquan; Sun, Mingui

    2013-01-01

    Anthropometric measurements, such as the circumferences of the hip, arm, leg and waist, the waist-to-hip ratio, and the body mass index, are of high significance in obesity and fitness evaluation. In this paper, we present a home-based imaging system capable of conducting such anthropometric measurements. Body images are acquired at different angles using a home camera and a simple rotating disk, and advanced image processing algorithms are utilized for 3D body surface reconstruction. A coarse body shape model is first established from segmented body silhouettes; this model is then refined through an inter-image consistency maximization process based on an energy function. Our experimental results using both a mannequin surrogate and a real human body validate the feasibility of the proposed system.
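
    The coarse model-from-silhouettes step can be sketched as voxel carving: a voxel survives only if it projects inside every silhouette. The two orthographic views below are a strong simplification of the calibrated, rotating multi-view setup described in the abstract.

```python
# Shape-from-silhouette sketch: carve an n^3 voxel grid with two binary
# silhouettes taken along orthogonal (orthographic) viewing directions.
def visual_hull(sil_xy, sil_xz, n):
    """Keep voxel (x, y, z) only if its projections fall inside both
    silhouettes (views along the z axis and along the y axis)."""
    return {(x, y, z)
            for x in range(n) for y in range(n) for z in range(n)
            if sil_xy[x][y] and sil_xz[x][z]}

n = 4   # a 2x2x2 box centered in a 4^3 grid
sil_xy = [[1 if 1 <= x <= 2 and 1 <= y <= 2 else 0 for y in range(n)]
          for x in range(n)]
sil_xz = [[1 if 1 <= x <= 2 and 1 <= z <= 2 else 0 for z in range(n)]
          for x in range(n)]
print(len(visual_hull(sil_xy, sil_xz, n)))   # -> 8 surviving voxels
```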

  2. Precise shape reconstruction by active pattern in total-internal-reflection-based tactile sensor.

    Science.gov (United States)

    Saga, Satoshi; Taira, Ryosuke; Deguchi, Koichiro

    2014-03-01

    We are developing a total-internal-reflection-based tactile sensor in which the surface shape is reconstructed using optical reflection. The sensor consists of silicone rubber, an image pattern, and a camera, and it reconstructs the shape of the sensor surface from an image of the pattern reflected at the inner sensor surface by total internal reflection. In this study, we propose precise real-time reconstruction employing an optimization method, and we further propose the use of active patterns: since deformation of the reflection image causes reconstruction errors, controlling the image pattern lets the sensor reconstruct the surface deformation more precisely. We implement the proposed optimization and active-pattern-based reconstruction methods in a reflection-based tactile sensor and perform reconstruction experiments with the system. A precise deformation experiment confirms the linearity and precision of the reconstruction.

  3. Reconstructing Space- and Energy-Dependent Exciton Generation in Solution-Processed Inverted Organic Solar Cells.

    Science.gov (United States)

    Wang, Yuheng; Zhang, Yajie; Lu, Guanghao; Feng, Xiaoshan; Xiao, Tong; Xie, Jing; Liu, Xiaoyan; Ji, Jiahui; Wei, Zhixiang; Bu, Laju

    2018-04-25

    Photon absorption-induced exciton generation plays an important role in determining the photovoltaic properties of donor/acceptor organic solar cells with an inverted architecture. However, the reconstruction of light harvesting, and thus of exciton generation, at different locations within an inverted organic device is still not well resolved. Here, we investigate the film depth-dependent light absorption spectra in a small-molecule donor/acceptor film. Including depth-dependent spectra in an optical transfer matrix method allows us to reconstruct both film depth- and energy-dependent exciton generation profiles, using which the short-circuit current and external quantum efficiency of the inverted device are simulated and compared with experimental measurements. The film depth-dependent spectroscopy, from which we can simultaneously reconstruct the light harvesting profile, the depth-dependent composition distribution, and the vertical energy level variations, provides insights into the photovoltaic process. In combination with appropriate material processing methods and device architectures, the method proposed in this work will help optimize film depth-dependent optical/electronic properties for high-performance solar cells.
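
    The optical transfer matrix method mentioned above can be sketched for a single film at normal incidence using the standard 2×2 characteristic matrix; the refractive indices and the quarter-wave check below are illustrative textbook values, not the paper's device stack.

```python
import cmath, math

# Characteristic-matrix (transfer-matrix) sketch for one lossless thin
# film at normal incidence between an incident medium n0 and a
# substrate ns.
def reflectance(n0, n1, ns, d, wavelength):
    delta = 2 * math.pi * n1 * d / wavelength   # phase thickness
    m11 = m22 = cmath.cos(delta)
    m12 = 1j * cmath.sin(delta) / n1
    m21 = 1j * n1 * cmath.sin(delta)
    B = m11 + m12 * ns
    C = m21 + m22 * ns
    r = (n0 * B - C) / (n0 * B + C)
    return abs(r) ** 2

# a quarter-wave layer with n1 = sqrt(n0 * ns) antireflects perfectly
lam = 550.0
print(round(reflectance(1.0, 1.5, 2.25, lam / (4 * 1.5), lam), 6))  # -> 0.0
```

Multilayer stacks follow by multiplying one such matrix per layer, which is how depth-dependent absorption profiles are built up in the full method.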

  4. Energy, economic and environmental performance simulation of a hybrid renewable microgeneration system with neural network predictive control

    Directory of Open Access Journals (Sweden)

    Evgueniy Entchev

    2018-03-01

    Full Text Available The use of artificial neural networks (ANNs) in various applications has grown significantly over the years. This paper compares an ANN-based approach with conventional on-off control applied to the operation of a ground source heat pump/photovoltaic thermal system serving a single house located in Ottawa (Canada) for heating and cooling purposes. The hybrid renewable microgeneration system was investigated using the dynamic simulation software TRNSYS. A controller for predicting the future room temperature was developed in the MATLAB environment, and six ANN control logics were analyzed. The comparison was performed in terms of the ability to maintain the desired indoor comfort levels, primary energy consumption, operating costs and carbon dioxide equivalent emissions during a week of the heating period and a week of the cooling period. The results showed that the ANN approach is potentially able to alleviate the intensity of thermal discomfort associated with overheating/overcooling phenomena, but it could cause an increase in unmet comfort hours. The analysis also highlighted that the ANN-based strategies could reduce the primary energy consumption (by up to around 36%), the operating costs (by up to around 81%) as well as the carbon dioxide equivalent emissions (by up to around 36%). Keywords: Hybrid microgeneration system, Ground source heat pump, Photovoltaic thermal, Artificial neural network, Predictive control, Energy saving

  5. Intelligent MRTD testing for thermal imaging system using ANN

    Science.gov (United States)

    Sun, Junyue; Ma, Dongmei

    2006-01-01

    The Minimum Resolvable Temperature Difference (MRTD) is the most widely accepted figure of merit for describing the performance of a thermal imaging system, and many models have been proposed to predict it. MRTD testing is a psychophysical task in which observer biases are unavoidable. It requires laboratory conditions such as normal air conditioning and a constant temperature, needs expensive measuring equipment, and takes a considerable period of time; especially when measuring imagers of the same type, the test is time consuming. An automated and intelligent measurement method is therefore worth discussing. This paper adopts the concept of automated MRTD testing using a boundary contour system and fuzzy ARTMAP, but uses different methods: it describes an automated MRTD testing procedure based on a back-propagation network. First, a frame grabber is used to capture the 4-bar target image data. Then, according to the image gray scale, the image is segmented to locate the 4-bar pattern, and a feature vector representing the image characteristics and human detection ability is extracted. These feature sets, along with the known target visibility, are used to train the ANN (Artificial Neural Network). This is in effect a nonlinear classification (over the input dimensions) of the image series by the ANN, whose task is to judge whether an image is resolvable or uncertain. The trained ANN then emulates observer performance in determining the MRTD. This method can reduce the uncertainties between observers and long-term time-dependent factors through standardization. This paper introduces the feature extraction algorithm, demonstrates the feasibility of the whole process, and gives the accuracy of the MRTD measurement.

  6. Amplitude-based data selection for optimal retrospective reconstruction in micro-SPECT

    Science.gov (United States)

    Breuilly, M.; Malandain, G.; Guglielmi, J.; Marsault, R.; Pourcher, T.; Franken, P. R.; Darcourt, J.

    2013-04-01

    Respiratory motion can blur the tomographic reconstruction of positron emission tomography or single-photon emission computed tomography (SPECT) images, which subsequently impairs quantitative measurements, e.g. in the upper abdomen area. Respiratory-signal phase-based gated reconstruction addresses this problem, but deteriorates the signal-to-noise ratio (SNR) and other intensity-based quality measures. This paper proposes a 3D reconstruction method dedicated to micro-SPECT imaging of mice. From a 4D acquisition, the phase images exhibiting motion are identified and the associated list-mode data are discarded, which enables the reconstruction of a 3D image without respiratory artefacts. The proposed method allows a motion-free reconstruction exhibiting both satisfactory count statistics and measurement accuracy. With respect to a standard (non-gated) 3D reconstruction without breathing motion correction, an increase of 14.6% in the mean standardized uptake value has been observed, while, with respect to a gated 4D reconstruction, up to 60% less noise and an increase of up to 124% in the SNR have been demonstrated.
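
The selection step described above, keeping only the respiratory phase bins that show little motion and pooling their list-mode counts, can be sketched as follows. The bin structure and the scalar motion metric here are hypothetical stand-ins, not the authors' implementation:

```python
def select_quiet_phases(phase_counts, motion_score, threshold):
    """Keep phase bins whose motion score is below threshold and
    pool their counts into a single reconstruction-ready data set.

    phase_counts: per-phase count totals (stand-ins for per-phase
    list-mode data); motion_score: one hypothetical motion metric
    per phase; threshold: metric value above which a phase is
    considered to exhibit motion and is discarded."""
    kept = [c for c, m in zip(phase_counts, motion_score) if m < threshold]
    # Pooling the retained phases preserves count statistics that a
    # single-gate 4D reconstruction would sacrifice.
    return sum(kept), len(kept)

total, n_kept = select_quiet_phases(
    phase_counts=[1000, 950, 400, 980],   # counts per respiratory phase
    motion_score=[0.1, 0.2, 0.9, 0.15],   # phase 3 shows motion
    threshold=0.5,
)
```

The discarded phase costs some counts, but far fewer than gating to a single phase would.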

  7. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation

    International Nuclear Information System (INIS)

    Jia Xun; Lou Yifei; Li Ruijiang; Song, William Y.; Jiang, Steve B.

    2010-01-01

    Purpose: Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. Methods: The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. Results: It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of ∼360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved by our fast CBCT reconstruction algorithm. Conclusions: This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering the imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.
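
The energy functional minimized above (data fidelity plus total variation) can be illustrated on a toy 1D denoising problem. This is not the authors' GPU forward-backward splitting solver; it is a plain gradient descent on a smoothed-TV objective, with all parameters chosen for illustration:

```python
import math

def tv_denoise(b, lam=0.3, eps=1e-2, step=0.05, iters=800):
    """Minimize 0.5*||x - b||^2 + lam * sum_i sqrt((x[i+1]-x[i])^2 + eps)
    by gradient descent. eps smooths the TV term so it is differentiable."""
    x = list(b)
    n = len(x)
    for _ in range(iters):
        g = [x[i] - b[i] for i in range(n)]          # fidelity gradient
        for i in range(n - 1):                        # smoothed-TV gradient
            d = x[i + 1] - x[i]
            t = lam * d / math.sqrt(d * d + eps)
            g[i] -= t
            g[i + 1] += t
        x = [x[i] - step * g[i] for i in range(n)]
    return x

# Piecewise-constant "image row" with a deterministic alternating perturbation
truth = [0.0] * 8 + [1.0] * 8
noisy = [v + 0.2 * (-1) ** i for i, v in enumerate(truth)]
rec = tv_denoise(noisy)
```

TV regularization suppresses the high-frequency perturbation while keeping the sharp jump, which is why it suits undersampled, noisy projection data.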

  8. GPU-based fast cone beam CT reconstruction from undersampled and noisy projection data via total variation.

    Science.gov (United States)

    Jia, Xun; Lou, Yifei; Li, Ruijiang; Song, William Y; Jiang, Steve B

    2010-04-01

    Cone-beam CT (CBCT) plays an important role in image guided radiation therapy (IGRT). However, the large radiation dose from serial CBCT scans in most IGRT procedures raises a clinical concern, especially for pediatric patients who are essentially excluded from receiving IGRT for this reason. The goal of this work is to develop a fast GPU-based algorithm to reconstruct CBCT from undersampled and noisy projection data so as to lower the imaging dose. The CBCT is reconstructed by minimizing an energy functional consisting of a data fidelity term and a total variation regularization term. The authors developed a GPU-friendly version of the forward-backward splitting algorithm to solve this model. A multigrid technique is also employed. It is found that 20-40 x-ray projections are sufficient to reconstruct images with satisfactory quality for IGRT. The reconstruction time ranges from 77 to 130 s on an NVIDIA Tesla C1060 (NVIDIA, Santa Clara, CA) GPU card, depending on the number of projections used, which is estimated to be about 100 times faster than similar iterative reconstruction approaches. Moreover, phantom studies indicate that the algorithm enables the CBCT to be reconstructed under a scanning protocol with as low as 0.1 mA s/projection. Compared with the currently widely used full-fan head and neck scanning protocol of approximately 360 projections with 0.4 mA s/projection, it is estimated that an overall 36-72 times dose reduction has been achieved by our fast CBCT reconstruction algorithm. This work indicates that the developed GPU-based CBCT reconstruction algorithm is capable of lowering the imaging dose considerably. The high computational efficiency of this algorithm makes the iterative CBCT reconstruction approach applicable in real clinical environments.

  9. Pre-reconstruction dual-energy, X-ray computerized tomography (CT): theory, implementation, results, and clinical use

    International Nuclear Information System (INIS)

    Oravez, W.T.

    1986-01-01

    For the task of bone mineral measurement, single-energy quantitative CT has demonstrated its worth in terms of precision for most longitudinal clinical studies. However, for cross-sectional clinical studies, known inaccuracies exist due to less-than-robust beam-hardening corrections and to negatively biased bone mineral measurements caused by the unknown, variable concentration of bone marrow fat within the metabolically active trabecular bone space. A dual-energy measurement technique provides a solution to these deficiencies of single-energy measurements. The fundamental theory of dual-energy measurement techniques is based on a Compton-photoelectric approximation and the mixture rule for the total attenuation coefficient. Resolution of the atomic composition and electron density components of attenuation should then be possible. To take full advantage of these principles, the raw dual-energy projection values are operated on before reconstruction. This method enables beam-hardening correction and composition-selective imaging. Rapid kilovoltage switching between projection measurements, rather than serial measurements, assures the best measurement quality.
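
The Compton-photoelectric approximation above models the attenuation coefficient as a weighted sum of two basis functions, so two measurements at two tube energies determine the two weights. A minimal sketch, with the photoelectric basis taken as E^-3 and the Compton basis as the Klein-Nishina function (the actual pre-reconstruction processing in the thesis operates on projection data, not on single coefficients):

```python
import math

def klein_nishina(e_kev):
    """Klein-Nishina total cross-section shape (Compton basis function)."""
    a = e_kev / 511.0
    return ((1 + a) / a ** 2 * (2 * (1 + a) / (1 + 2 * a) - math.log(1 + 2 * a) / a)
            + math.log(1 + 2 * a) / (2 * a) - (1 + 3 * a) / (1 + 2 * a) ** 2)

def decompose(mu_low, mu_high, e_low, e_high):
    """Solve the 2x2 system  mu(E) = a_p * E**-3 + a_c * f_KN(E)
    for the photoelectric (a_p) and Compton (a_c) coefficients."""
    m = [[e_low ** -3, klein_nishina(e_low)],
         [e_high ** -3, klein_nishina(e_high)]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    a_p = (mu_low * m[1][1] - mu_high * m[0][1]) / det
    a_c = (m[0][0] * mu_high - m[1][0] * mu_low) / det
    return a_p, a_c
```

Given the two coefficients, composition-selective images can be formed from either component alone.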

  10. Energy Reconstruction for Events Detected in TES X-ray Detectors

    Science.gov (United States)

    Ceballos, M. T.; Cardiel, N.; Cobo, B.

    2015-09-01

    The processing of the X-ray events detected by a TES (Transition Edge Sensor) device (such as the one that will be proposed in the ESA AO call for instruments for the Athena mission (Nandra et al. 2013) as the high-spectral-resolution instrument X-IFU (Barret et al. 2013)) is a several-step procedure that starts with the detection of the current pulses in a noisy signal and ends with their energy reconstruction. For this last stage, an energy calibration process is required to convert the pseudo-energies measured in the detector into the real energies of the incoming photons, accounting for possible nonlinearity effects in the detector. We present the details of the energy calibration algorithm we implemented as the last part of the Event Processing software that we are developing for the X-IFU instrument, which permits the calculation of the calibration constants in an analytical way.
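
One common way to realize such a pseudo-energy to energy calibration is to fit a low-order polynomial through known calibration lines; with three lines a quadratic is determined exactly, capturing a nonlinearity term. This is a generic sketch under that assumption, not the analytical algorithm of the X-IFU Event Processing software:

```python
def solve3(a, y):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [v] for row, v in zip(a, y)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[p] = m[p], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [u - f * v for u, v in zip(m[r], m[col])]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def fit_calibration(pseudo, true):
    """Fit E_true = c0 + c1*p + c2*p**2 through three calibration lines."""
    return solve3([[1.0, p, p * p] for p in pseudo], true)

# Hypothetical calibration lines: pseudo-energy (detector units) vs true energy (keV)
c = fit_calibration([1.00, 2.05, 3.20], [1.0, 2.0, 3.0])
reconstruct = lambda p: c[0] + c[1] * p + c[2] * p * p
```

The quadratic interpolates the calibration points exactly, so detector nonlinearity between them is absorbed into c2.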

  11. Neutrino energy reconstruction from one-muon and one-proton events

    Energy Technology Data Exchange (ETDEWEB)

    Furmanski, Andrew P.; Sobczyk, Jan T.

    2017-06-01

    We propose a method of selecting a high-purity sample of charged current quasielastic neutrino interactions to obtain a precise reconstruction of the neutrino energy. The performance of the method was verified with several tests using genie, neut, and nuwro Monte Carlo event generators with both carbon and argon targets. The method can be useful in neutrino oscillation studies with beams of a few GeV.
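
For a one-muon one-proton (CCQE-like) event, a calorimetric neutrino-energy estimate adds the total muon energy, the proton kinetic energy, and a nucleon removal energy. The sketch below illustrates that estimator in simplified form; the removal-energy value is an assumed placeholder and this is not the generator-validated selection of the paper:

```python
import math

M_MU = 105.66  # muon mass [MeV/c^2]

def reconstruct_enu(p_mu, t_p, eps=25.0):
    """Calorimetric neutrino-energy estimate [MeV] for a one-muon,
    one-proton event: E_nu ~ E_mu + T_p + eps, where p_mu is the
    muon momentum [MeV/c], t_p the proton kinetic energy [MeV] and
    eps an assumed nucleon removal energy [MeV]."""
    e_mu = math.sqrt(p_mu ** 2 + M_MU ** 2)  # total muon energy
    return e_mu + t_p + eps
```

Because both final-state particles are measured, this estimator is less sensitive to Fermi motion than the muon-only quasielastic formula.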

  12. Harmonic analysis in integrated energy system based on compressed sensing

    International Nuclear Information System (INIS)

    Yang, Ting; Pen, Haibo; Wang, Dan; Wang, Zhaoxia

    2016-01-01

    Highlights: • We propose a harmonic/inter-harmonic analysis scheme based on compressed sensing theory. • The sparseness property of harmonic signals in electrical power systems is proved. • The ratio formula for the sparsity of the fundamental and harmonic components is presented. • A Spectral Projected Gradient-Fundamental Filter reconstruction algorithm is proposed. • SPG-FF enhances the precision of harmonic detection and signal reconstruction. - Abstract: The advent of integrated energy systems has enabled various distributed energy resources to access the system through different power electronic devices, which has made the harmonic environment more complex. Harmonic detection and analysis methods of low complexity and high precision are needed to improve power quality. To overcome the large data storage requirements and the high compression complexity of sampling under the Nyquist framework, this paper presents a harmonic analysis scheme based on compressed sensing theory. The proposed scheme performs compressive sampling, signal reconstruction and harmonic detection simultaneously. In the proposed scheme, the sparsity of the harmonic signals in the Discrete Fourier Transform (DFT) basis is numerically calculated first. This is followed by a proof that the necessary conditions for compressed sensing are satisfied. A binary sparse measurement is then leveraged to reduce the storage space of the sampling unit. In the recovery process, a novel reconstruction algorithm, the Spectral Projected Gradient with Fundamental Filter (SPG-FF) algorithm, is proposed to enhance the reconstruction precision. An actual microgrid system is used as a simulation example. The experimental results show that the proposed scheme effectively enhances the precision of harmonic and inter-harmonic detection with low computing complexity, and has good
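
The sparsity premise underlying the scheme, that a harmonic power signal has only a handful of significant coefficients in the DFT basis, can be checked numerically with a toy signal. This illustrates only the sparsity calculation, not the SPG-FF reconstruction algorithm:

```python
import cmath
import math

def dft_magnitudes(x):
    """Plain O(N^2) discrete Fourier transform magnitudes."""
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

n = 64
# Fundamental (5 cycles per window) plus a smaller harmonic tone (15 cycles)
signal = [math.sin(2 * math.pi * 5 * t / n)
          + 0.3 * math.sin(2 * math.pi * 15 * t / n)
          for t in range(n)]
mags = dft_magnitudes(signal)
# Only the bins at the two tones (and their mirror frequencies) are significant
large_bins = [k for k, m in enumerate(mags) if m > 1.0]
```

Of 64 coefficients only 4 are significant, which is exactly the kind of sparsity compressed sensing exploits.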

  13. Jo Ann Baumgartner and Sam Earnshaw: Organizers and Farmers

    OpenAIRE

    Rabkin, Sarah

    2010-01-01

    Jo Ann Baumgartner directs the Wild Farm Alliance, based in Watsonville, California. WFA’s mission, as described on the organization’s website, is “to promote agriculture that helps to protect and restore wild Nature.” Through research, publications, presentations, events, policy work, and consulting, the organization works to “connect food systems with ecosystems.” Sam Earnshaw is Central Coast regional coordinator of the Community Alliance with Family Farmers. Working with CAFF’s f...

  14. Data-Driven Modeling of Complex Systems by means of a Dynamical ANN

    Science.gov (United States)

    Seleznev, A.; Mukhin, D.; Gavrilov, A.; Loskutov, E.; Feigin, A.

    2017-12-01

    The data-driven methods for modeling and prognosis of complex dynamical systems are becoming more and more popular in various fields due to the growth of high-resolution data. We distinguish two basic steps in such an approach: (i) determining the phase subspace of the system, or embedding, from available time series and (ii) constructing an evolution operator acting in this reduced subspace. In this work we suggest a novel approach combining these two steps by means of the construction of an artificial neural network (ANN) with a special topology. The proposed ANN-based model, on the one hand, projects the data onto a low-dimensional manifold and, on the other hand, models a dynamical system on this manifold. In effect, this is a recurrent multilayer ANN which has internal dynamics and is capable of generating time series. A very important point of the proposed methodology is the optimization of the model, which allows us to avoid overfitting: we use a Bayesian criterion to optimize the ANN structure and to estimate both the degree of nonlinearity of the evolution operator and the complexity of the nonlinear manifold onto which the data are projected. The proposed modeling technique will be applied to the analysis of high-dimensional dynamical systems: the Lorenz '96 model of atmospheric turbulence, producing high-dimensional space-time chaos, and a quasi-geostrophic three-layer model of the Earth's atmosphere with natural orography, describing the dynamics of synoptic vortexes as well as mesoscale blocking systems. The possibility of applying the proposed methodology to analyze real measured data is also discussed. The study was supported by the Russian Science Foundation (grant #16-12-10198).

  15. Anne Veesaar astus Valgas üles uudses rollis / Jaan Rapp

    Index Scriptorium Estoniae

    Rapp, Jaan

    2010-01-01

    On the last Friday of each month in 2010, actress Anne Veesaar reads excerpts from the work of Valgamaa writers on Raadio Ruut. At a meeting with readers on 14 January, the Valga-born actress spoke about her biography "Anne Veesaar : elus, see on kõige tähtsam" ("Anne Veesaar: in life, that is the most important thing"), written down by Helen Eelrand, and about her current activities.

  16. Quartet-based methods to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Xu, Yifei; Wan, Xiu-Feng

    2014-02-20

    Phylogenetic networks are employed to visualize evolutionary relationships among a group of nucleotide sequences, genes or species when reticulate events like hybridization, recombination, reassortment and horizontal gene transfer are believed to be involved. In comparison to traditional distance-based methods, quartet-based methods consider more information in the reconstruction process and thus have the potential to be more accurate. We introduce QuartetSuite, which includes a set of new quartet-based methods, namely QuartetS, QuartetA, and QuartetM, to reconstruct phylogenetic networks from nucleotide sequences. We tested their performances and compared them with other popular methods on two simulated nucleotide sequence data sets: one generated from a tree topology and the other from a complicated evolutionary history containing three reticulate events. We further validated these methods on two real data sets: a bacterial data set consisting of seven concatenated genes of 36 bacterial species and an influenza data set related to the recently emerging H7N9 low-pathogenic avian influenza viruses in China. QuartetS, QuartetA, and QuartetM have the potential to accurately reconstruct evolutionary scenarios from simple branching trees to complicated networks containing many reticulate events. These methods could provide insights into the understanding of complicated biological evolutionary processes such as bacterial taxonomy and the reassortment of influenza viruses.

  17. Kõnelused Tartus / Anne Untera

    Index Scriptorium Estoniae

    Untera, Anne, 1951-

    2007-01-01

    On the joint seminar of Estonian, Latvian and German art historians held in Tartu on 8-10 May. Alexander Knorre spoke on the work of Karl August Senff, Ilona Audere on Friedrich Ludwig von Maydell, Mai Levin on Karl Alexander von Winkler, Kristiana Abele on Johann Walter-Kurau (1869-1932), Anne Untera on Konstantin and Sally von Kügelgen, Epp Preem on Julie Hagen-Schwartz, Friedrich Gross on Eduard von Gebhardt, and Katharina Hadding on Ida Kerkovius (1879-1970).

  18. Solar radiation modelling using ANNs for different climates in China

    International Nuclear Information System (INIS)

    Lam, Joseph C.; Wan, Kevin K.W.; Yang, Liu

    2008-01-01

    Artificial neural networks (ANNs) were used to develop prediction models for daily global solar radiation using measured sunshine duration for 40 cities covering nine major thermal climatic zones and sub-zones in China. Coefficients of determination (R²) for all 40 cities and nine climatic zones/sub-zones are 0.82 or higher, indicating a reasonably strong correlation between daily solar radiation and the corresponding sunshine hours. The mean bias error (MBE) varies from -3.3 MJ/m² in Ruoqiang (cold climates) to 2.19 MJ/m² in Anyang (cold climates). The root mean square error (RMSE) ranges from 1.4 MJ/m² in Altay (severe cold climates) to 4.01 MJ/m² in Ruoqiang. The three principal statistics (i.e., R², MBE and RMSE) of the climatic zone/sub-zone ANN models are very close to the corresponding zone/sub-zone averages of the individual city ANN models, suggesting that climatic zone ANN models could be used to estimate global solar radiation for locations within the respective zones/sub-zones where only measured sunshine duration data are available. (author)
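
The three evaluation statistics used above have standard definitions, which the abstract does not spell out. They can be computed as follows:

```python
import math

def mbe(pred, obs):
    """Mean bias error: average of (predicted - observed)."""
    return sum(p - o for p, o in zip(pred, obs)) / len(obs)

def rmse(pred, obs):
    """Root mean square error of predictions against observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

def r_squared(pred, obs):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for p, o in zip(pred, obs))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot
```

A positive MBE means the model over-predicts on average; RMSE penalizes large individual errors more strongly.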

  19. Prospective regularization design in prior-image-based reconstruction

    International Nuclear Information System (INIS)

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-01-01

    Prior-image-based reconstruction (PIBR) methods leveraging patient-specific anatomical information from previous imaging studies and/or sequences have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in

  20. Design of an artificial neural network, with the topology oriented to the reconstruction of neutron spectra; Diseno de una red neuronal artificial, con la topologia orientada a la reconstruccion del espectro de neutrones

    Energy Technology Data Exchange (ETDEWEB)

    Arteaga A, T.; Ortiz R, J.M.; Vega C, H.R.; Hernandez D, V.M.; Manzanares A, E.; Mercado S, G.A. [Unidades Academicas de Estudios Nucleares, Ingenieria Electrica y Matematicas, Universidad de Zacatecas, A.P. 336, 98000 Zacatecas (Mexico)]. e-mail: tarcicio70@yahoo.co.uk

    2006-07-01

    People who live at high altitude above sea level or at latitudes far from the equator, or who travel by plane, are exposed to elevated radiation levels generated by cosmic rays. Other radiation environments include medical equipment, particle accelerators and nuclear reactors. The evaluation of the biological risk from neutron radiation requires appropriate and reliable dosimetry. A commonly used system is the Bonner Sphere Spectrometer (EEB), whose purpose is to reconstruct the spectrum; this is important because the neutron equivalent dose depends strongly on neutron energy. The count rates obtained in each sphere are, in most cases, unfolded by iterative methods, Monte Carlo methods or maximum entropy; each of these has difficulties that motivate the development of complementary procedures. Recently, artificial neural networks (ANNs) have been used, and no conclusive results have yet been obtained. In this work an ANN was designed to obtain the neutron energy spectrum from the count rates of an EEB. The ANN was trained with 129 reference spectra obtained from the IAEA (1990, 2001); 24 were built with defined energies, including reference and operational isotopic neutron sources, accelerators, reactors, mathematical functions, and defined-energy spectra with several peaks. The spectra were transformed from lethargy units to energy and rebinned into 31 energy groups using the Monte Carlo code 4C. The rebinned spectra and the UTA4 response matrix were used to calculate the expected count rates in the EEB. These rates were used as input, and the respective spectra as output, during network training. The network is of back-propagation type with 5 layers of 7, 140, 140, 140 and 31 neurons, transfer functions logsig, tansig, logsig, logsig, logsig respectively, and the traingdx training algorithm. After training, the network was tested with a group of training spectra and others that
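
The mapping described above, from sphere count rates to a 31-bin spectrum through fully connected layers with logsig/tansig transfer functions, can be sketched as a forward pass. The layer sizes here are scaled down from the paper's 7-140-140-140-31 topology, the activation sequence is abridged to four layers, and the weights are illustrative random values, not a trained network:

```python
import math
import random

def logsig(x): return 1.0 / (1.0 + math.exp(-x))
def tansig(x): return math.tanh(x)

def layer(inputs, weights, biases, act):
    """One fully connected layer: act(W @ x + b)."""
    return [act(sum(w * v for w, v in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def unfold(count_rates, params, acts):
    """Forward pass of a spectrum-unfolding net: sphere count rates
    in, spectrum bins out."""
    x = count_rates
    for (w, b), act in zip(params, acts):
        x = layer(x, w, b, act)
    return x

# Deterministic toy weights: 7 -> 5 -> 5 -> 5 -> 31 (scaled-down topology)
random.seed(0)
sizes = [7, 5, 5, 5, 31]
params = [([[random.uniform(-0.5, 0.5) for _ in range(m)] for _ in range(n)],
           [0.0] * n) for m, n in zip(sizes, sizes[1:])]
spectrum = unfold([0.2, 0.4, 0.9, 1.0, 0.7, 0.3, 0.1], params,
                  acts=[logsig, tansig, logsig, logsig])
```

The final logsig layer keeps every spectrum bin in (0, 1), which suits spectra normalized to unit scale during training.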

  1. Perforator based rectus free tissue transfer for head and neck reconstruction: New reconstructive advantages from an old friend.

    Science.gov (United States)

    Kang, Stephen Y; Spector, Matthew E; Chepeha, Douglas B

    2017-11-01

    To demonstrate three reconstructive advantages of the perforator based rectus free tissue transfer: long pedicle, customizable adipose tissue, and volume reconstruction without muscle atrophy within a contained space. Thirty patients with defects of the head and neck were reconstructed with the perforator based rectus free tissue transfer. Transplant success was 93%. Mean pedicle length was 13.4 cm. Eleven patients (37%) had vessel-poor necks and the long pedicle provided by this transplant avoided the need for vein grafts in these patients. Adipose tissue was molded in 17 patients (57%). Twenty-five patients (83%) had defects within a contained space, such as the orbit, where it was critical to have a transplant that avoided muscle atrophy. The perforator based rectus free tissue transfer provides a long pedicle, moldable fat for flap customization, and is useful in reconstruction of defects within a contained space where volume loss due to muscle atrophy is prevented. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    Science.gov (United States)

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with that of standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts significantly more effectively than ASiR, and MBIR-reconstructed images were rated with significantly higher IQ scores than ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for a substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.

  3. Integrated IDA–ANN–DEA for assessment and optimization of energy consumption in industrial sectors

    International Nuclear Information System (INIS)

    Olanrewaju, O.A.; Jimoh, A.A.; Kholopane, P.A.

    2012-01-01

    This paper puts forward an integrated approach, based on the logarithmic mean Divisia index (LMDI, an index decomposition analysis (IDA) method), an artificial neural network (ANN) and data envelopment analysis (DEA), for the analysis of total energy efficiency and optimization in an industrial sector. The energy efficiency assessment and optimization of the proposed model use LMDI to decompose energy consumption into activity, structural and intensity indicators, which serve as inputs to the ANN. The ANN model is verified and validated by performing a linear regression comparison between the specifically measured energy consumption and the corresponding predicted energy consumption. The proposed approach utilizes the measure-specific, super-efficient DEA model for sensitivity analysis to determine the critical measured energy consumption and its optimization reductions. The proposed method is validated by applying it to the efficiency computation and analysis of historical data, as well as to the prediction and optimization capability, of the Canadian industrial sector. -- Highlights: ► An integrated IDA-ANN-DEA model for energy management is proposed. ► The model relies on aggregate energy and GDP data. ► The model explains how energy can be managed in the Canadian industrial sector.
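
The LMDI decomposition step feeding the ANN splits a change in energy use into activity, structure and intensity effects. A minimal additive LMDI-I sketch, in which the three effects sum exactly to the observed change (the paper's exact formulation and sector definitions may differ):

```python
import math

def lmean(a, b):
    """Logarithmic mean: L(a, b) = (a - b) / (ln a - ln b), L(a, a) = a."""
    return a if a == b else (a - b) / (math.log(a) - math.log(b))

def lmdi_decompose(q0, qt, s0, st, i0, it):
    """Additive LMDI-I decomposition of the change in energy use
    E = sum_j Q * S_j * I_j into activity, structure and intensity
    effects. q: total activity level; s, i: per-sector share and
    intensity lists at base (0) and terminal (t) periods."""
    act = stru = inten = 0.0
    for s0j, stj, i0j, itj in zip(s0, st, i0, it):
        e0, et = q0 * s0j * i0j, qt * stj * itj
        w = lmean(et, e0)                 # logarithmic-mean weight
        act += w * math.log(qt / q0)      # activity effect
        stru += w * math.log(stj / s0j)   # structural effect
        inten += w * math.log(itj / i0j)  # intensity effect
    return act, stru, inten
```

Because L(a, b) * ln(a / b) = a - b, the three effects add up to the total change in energy use with no residual, which is the main attraction of LMDI among IDA methods.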

  4. Reconstruction of inclined shower coordinates in electromagnetic calorimeters based on lead glass

    International Nuclear Information System (INIS)

    Vasil'ev, A.N.; Mochalov, V.V.; Solov'ev, L.F.

    2007-01-01

    A method for reconstructing the coordinates of inclined showers in lead-glass electromagnetic calorimeters is described. Such showers are generated by photons with energies of 0.5-4.0 GeV that are incident on the detector at angles of up to 30°. An analytical expression describing the actual photon coordinate in the calorimeter as a function of the coordinates of the shower's center of gravity is proposed. Using this expression, it is possible to reconstruct the coordinates of inclined electromagnetic showers over wide ranges of angles and energies. The dependences of the spatial resolution on the photon energy and angle are determined. The longitudinal fluctuations of the shower length and their effect on the spatial resolution of the calorimeter are discussed [ru]

  5. Design of an Experiment to Measure ann Using 3H(γ, pn)n at HIγS★

    Science.gov (United States)

    Friesen, F. Q. L.; Ahmed, M. W.; Crowe, B. J.; Crowell, A. S.; Cumberbatch, L. C.; Fallin, B.; Han, Z.; Howell, C. R.; Malone, R. M.; Markoff, D.; Tornow, W.; Witała, H.

    2016-03-01

    We provide an update on the development of an experiment at TUNL for determining the 1S0 neutron-neutron (nn) scattering length (ann) from differential cross-section measurements of three-body photodisintegration of the triton. The experiment will be conducted using a linearly polarized gamma-ray beam at the High Intensity Gamma-ray Source (HIγS) and tritium gas contained in thin-walled cells. The main components of the planned experiment are a 230 Ci gas target system, a set of wire chambers and silicon strip detectors on each side of the beam axis, and an array of neutron detectors on each side beyond the silicon detectors. The protons emitted in the reaction are tracked in the wire chambers, and their energy and position are measured in the silicon strip detectors. The first iteration of the experiment will be simplified, making use of a collimator system and silicon detectors to interrogate the main region of interest near 90° in the polar angle. Monte Carlo simulations based on rigorous 3N calculations have been conducted to validate the sensitivity of the experimental setup to ann. This research was supported in part by the DOE Office of Nuclear Physics, Grant Number DE-FG02-97ER41033.

  6. On-line dynamic monitoring automotive exhausts: using BP-ANN for distinguishing multi-components

    Science.gov (United States)

    Zhao, Yudi; Wei, Ruyi; Liu, Xuebin

    2017-10-01

    Remote sensing-Fourier transform infrared spectroscopy (RS-FTIR) is one of the most important technologies in atmospheric pollutant monitoring. It is well suited to on-line dynamic remote sensing of air pollutants, especially automotive exhausts. However, their absorption spectra are often seriously overlapped in the atmospheric infrared window bands, i.e., the MWIR (3-5 μm). The Artificial Neural Network (ANN) is an algorithm based on the theory of biological neural networks, which simplifies partial differential equations of complex construction. For its preferable performance in nonlinear mapping and fitting, in this paper we utilize a Back Propagation Artificial Neural Network (BP-ANN) to quantitatively analyze the concentrations of four typical automotive exhaust components: CO, NO, NO2 and SO2. We extracted the original data for these gases from the HITRAN database, most of which virtually overlap, and established a mixed multi-component simulation environment. Based on the Beer-Lambert law, concentrations can be retrieved from the absorbance of the spectra. Parameters including the learning rate, momentum factor, number of hidden nodes, and number of iterations were obtained when the BP network was trained with 80 groups of input data. By tuning these parameters, the network can be optimized to produce higher precision for the retrieved concentrations. This BP-ANN method proves to be an effective and promising algorithm for multi-component analysis of automotive exhausts.
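    The retrieval pipeline described above (Beer-Lambert mixing of overlapped spectra, then a BP network trained on 80 mixtures) can be sketched as follows. This is a minimal illustration, not the authors' code: the channel count, hidden-layer size, learning rate, and the random absorption coefficients standing in for HITRAN spectra are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic Beer-Lambert mixtures (illustrative stand-in for HITRAN spectra) ---
# Each gas contributes absorbance eps * c at every spectral channel; the
# network learns the inverse map from mixed absorbance back to concentrations.
n_channels, n_gases = 16, 4
eps = rng.uniform(0.1, 1.0, (n_channels, n_gases))   # hypothetical absorption coefficients
C_train = rng.uniform(0.0, 1.0, (80, n_gases))       # 80 training mixtures, as in the abstract
A_train = C_train @ eps.T                            # overlapped absorbance spectra

# --- One-hidden-layer BP network trained by plain gradient descent ---
n_hidden, lr = 12, 0.05                              # tuning values are assumptions
W1 = rng.normal(0, 0.3, (n_channels, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.3, (n_hidden, n_gases))
b2 = np.zeros(n_gases)

def forward(A):
    h = np.tanh(A @ W1 + b1)       # hidden activation
    return h, h @ W2 + b2          # linear output layer for the concentrations

losses = []
for _ in range(2000):
    h, C_hat = forward(A_train)
    err = C_hat - C_train
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the mean-squared error
    gW2 = h.T @ err / len(A_train)
    gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)   # tanh derivative
    gW1 = A_train.T @ dh / len(A_train)
    gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

With the trained weights, `forward(A_new)[1]` returns concentration estimates for new mixed spectra.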

  7. Artificial Neural Network (ANN) design for Hg-Se interactions and their effect on reduction of Hg uptake by radish plant

    International Nuclear Information System (INIS)

    Kumar Rohit Raj; Abhishek Kardam; Shalini Srivastava; Jyoti Kumar Arora

    2010-01-01

    The tendency of selenium to interact with heavy metals in the presence of naturally occurring species has been exploited for the development of green bioremediation of toxic metals from soil using Artificial Neural Network (ANN) modeling. The cross-validation of the data for the reduction in uptake of Hg(II) ions in the plant R. sativus grown in soil and sand culture in the presence of selenium has been used for ANN modeling. An ANN model based on the combination of back propagation and principal component analysis was able to predict the reduction in Hg uptake with a sigmoid axon transfer function. The data of fifty laboratory experimental sets were used for structuring the single-layer ANN model. The series of experiments resulted in a performance evaluation based on using 20% of the data for testing and 20% for cross-validation at 1,500 epochs with 0.70 momentum. The Levenberg-Marquardt algorithm (LMA) was found to be the best of the BP algorithms, with a minimum mean squared error (MSE) at the eighth decimal place for both training and cross-validation. (author)

  8. Reconstruction of the Dark Energy Equation of State from the Latest Observations

    Science.gov (United States)

    Dai, Ji-Ping; Yang, Yang; Xia, Jun-Qing

    2018-04-01

    Since the discovery of the accelerating expansion of our universe in 1998, studying the features of dark energy has remained a hot topic in modern cosmology. In the literature, dark energy is usually described by w ≡ P/ρ, where P and ρ denote its pressure and energy density. Therefore, exploring the evolution of w is the key approach to understanding dark energy. In this work, we adopt three different methods, polynomial expansion, principal component analysis, and the correlated prior method, to reconstruct w with a collection of the latest observations, including type-Ia supernovae, the cosmic microwave background, large-scale structure, Hubble measurements, and baryon acoustic oscillations (BAOs), and find that the concordance cosmological constant model (w = ‑1) is still safely consistent with these observational data at the 68% confidence level. However, when we add the high-redshift BAO measurement from the Lyα forest (Lyα FB) of BOSS DR11 quasars into the calculation, there is a significant impact on the reconstruction result. In the standard ΛCDM model, the Lyα FB data slightly prefer a negative dark energy density; to avoid this problem, a dark energy model with w significantly smaller than ‑1 is needed to explain the Lyα FB data. In this work, we find the consistent conclusion that there is a strong preference for time-evolving behavior of the dark energy w at high redshifts when including the Lyα FB data. Therefore, we think that the Lyα FB data need to be treated with careful attention when studying the evolution of the dark energy equation of state.

  9. Environment-based pin-power reconstruction method for homogeneous core calculations

    International Nuclear Information System (INIS)

    Leroyer, H.; Brosselard, C.; Girardi, E.

    2012-01-01

    Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assemblies calculations relying on a fundamental mode approach are used to generate cross-sections libraries for PWRs core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)

  10. LHCb jet reconstruction

    International Nuclear Information System (INIS)

    Francisco, Oscar; Rangel, Murilo; Barter, William; Bursche, Albert; Potterat, Cedric; Coco, Victor

    2012-01-01

    Full text: The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy of up to 14 TeV in the center of mass. In 2011, data taking was done at a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than 4 × 10^32 cm^-2 s^-1 and the integrated luminosity reached 1.02 fb^-1 at LHCb. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in the η × φ space and on the transverse momentum of the particles. To maximize the energy resolution, all information from the trackers and the calorimeters is used in the LHCb experiment to create objects called particle flow objects, which are used as input to the anti-kt algorithm. LHCb is especially interesting for jet studies because its η region is complementary to those of the other main experiments at the LHC. We will present the first results of jet reconstruction using 2011 LHCb data. (author)
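    The anti-kt distance measure described above can be written down compactly. The sketch below shows only a single clustering decision (real analyses use the FastJet library); the radius parameter R = 0.5 is an illustrative choice, not taken from the abstract.

```python
import math

def antikt_distances(particles):
    """Return the smallest anti-kt distance and the pair it belongs to.

    Each particle is (pt, eta, phi). Anti-kt uses the *inverse squared*
    transverse momentum, so hard particles drive the clustering.
    """
    R = 0.5  # jet radius parameter (illustrative choice)
    dists = []
    for i, (pti, etai, phii) in enumerate(particles):
        # d_iB: distance of particle i to the beam
        dists.append((1.0 / pti ** 2, (i, None)))
        for j, (ptj, etaj, phij) in enumerate(particles[:i]):
            # Delta R^2 in the eta-phi plane, with phi wrapped to [-pi, pi]
            dphi = abs(phii - phij)
            if dphi > math.pi:
                dphi = 2 * math.pi - dphi
            dr2 = (etai - etaj) ** 2 + dphi ** 2
            dij = min(1.0 / pti ** 2, 1.0 / ptj ** 2) * dr2 / R ** 2
            dists.append((dij, (i, j)))
    # The smallest distance decides the next step: merge i,j or promote i to a jet
    return min(dists)
```

Because the pairwise distance scales with 1/pt², the hardest particle absorbs nearby soft ones first, which is what gives anti-kt jets their regular, cone-like shape.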

  11. LHCb jet reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Francisco, Oscar; Rangel, Murilo [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil); Barter, William [University of Cambridge, Cambridge (United Kingdom); Bursche, Albert [Universitat Zurich, Zurich (Switzerland); Potterat, Cedric [Universitat de Barcelona, Barcelona (Spain); Coco, Victor [Nikhef National Institute for Subatomic Physics, Amsterdam (Netherlands)

    2012-07-01

    Full text: The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy of up to 14 TeV in the center of mass. In 2011, data taking was done at a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than 4 × 10^32 cm^-2 s^-1 and the integrated luminosity reached 1.02 fb^-1 at LHCb. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in the η × φ space and on the transverse momentum of the particles. To maximize the energy resolution, all information from the trackers and the calorimeters is used in the LHCb experiment to create objects called particle flow objects, which are used as input to the anti-kt algorithm. LHCb is especially interesting for jet studies because its η region is complementary to those of the other main experiments at the LHC. We will present the first results of jet reconstruction using 2011 LHCb data. (author)

  12. Energy-efficient ECG compression on wireless biosensors via minimal coherence sensing and weighted ℓ₁ minimization reconstruction.

    Science.gov (United States)

    Zhang, Jun; Gu, Zhenghui; Yu, Zhu Liang; Li, Yuanqing

    2015-03-01

    Low energy consumption is crucial for body area networks (BANs). In BAN-enabled ECG monitoring, continuous monitoring requires the sensor nodes to transmit a huge amount of data to the sink node, which leads to excessive energy consumption. To reduce airtime over energy-hungry wireless links, this paper presents an energy-efficient compressed sensing (CS)-based approach for on-node ECG compression. First, an algorithm called minimal mutual coherence pursuit is proposed to construct sparse binary measurement matrices, which can encode the ECG signals with superior performance and extremely low complexity. Second, in order to minimize the data rate required for faithful reconstruction, a weighted ℓ1 minimization model is derived by exploiting multisource prior knowledge in the wavelet domain. Experimental results on the MIT-BIH arrhythmia database reveal that the proposed approach obtains a higher compression ratio than state-of-the-art CS-based methods. Together with its low encoding complexity, our approach achieves significant energy savings in both the encoding process and wireless transmission.
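    The on-node encoding stage is cheap precisely because the measurement matrix is sparse and binary: compressing a frame costs only a few additions per measurement. The sketch below builds a random sparse binary matrix and checks its mutual coherence; the paper instead places the ones via minimal mutual coherence pursuit, which is not reproduced here, and the sizes and column weight are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_binary_matrix(m, n, d=4):
    """Random m-by-n binary measurement matrix with d ones per column.

    (The paper selects the ones' positions to minimize mutual coherence;
    here they are random, and m, n, d are illustrative values.)
    """
    Phi = np.zeros((m, n))
    for col in range(n):
        rows = rng.choice(m, size=d, replace=False)
        Phi[rows, col] = 1.0
    return Phi

n, m = 256, 64                       # 4:1 compression ratio (assumed)
Phi = sparse_binary_matrix(m, n)
x = np.zeros(n)                      # toy sparse signal standing in for an ECG frame
x[[10, 50, 200]] = [1.0, -0.5, 2.0]
y = Phi @ x                          # on-node encoding: each sample is a sum of a few entries

# Mutual coherence: largest normalized inner product between distinct columns.
norms = np.linalg.norm(Phi, axis=0)
G = Phi.T @ Phi / norms[:, None] / norms[None, :]
mu = np.abs(G - np.eye(n)).max()
```

A smaller `mu` loosens the sparsity level at which faithful ℓ1 reconstruction is guaranteed, which is why the paper optimizes it rather than drawing the matrix at random.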

  13. WE-FG-207B-03: Multi-Energy CT Reconstruction with Spatial Spectral Nonlocal Means Regularization

    Energy Technology Data Exchange (ETDEWEB)

    Li, B [University of Texas Southwestern Medical Center, Dallas, TX (United States); Southern Medical University, Guangzhou, Guangdong (China); Shen, C; Ouyang, L; Yang, M; Jiang, S; Jia, X [University of Texas Southwestern Medical Center, Dallas, TX (United States); Zhou, L [Southern Medical University, Guangzhou, Guangdong (China)

    2016-06-15

    Purpose: Multi-energy computed tomography (MECT) is an emerging application in medical imaging due to its ability to differentiate materials and its potential for molecular imaging. In MECT, correlations exist between images at different spatial locations and energy channels. It is desirable to incorporate these correlations in reconstruction to improve image quality. For this purpose, this study proposes a MECT reconstruction technique that employs spatial spectral non-local means (ssNLM) regularization. Methods: We consider a kVp-switching scanning method in which the source energy is rapidly switched during data acquisition. For each energy channel, this yields projection data acquired at a number of angles, whereas the projection angles differ among channels. We formulate the reconstruction task as an optimization problem. A least-squares term enforces data fidelity. A ssNLM term is used as regularization to encourage similarities among image patches at different spatial locations and channels. When comparing image patches at different channels, intensity differences were corrected by a transformation estimated via histogram equalization during the reconstruction process. Results: We tested our method in a simulation study with an NCAT phantom and an experimental study with a Gammex phantom. For comparison purposes, we also performed reconstructions using the conjugate-gradient least squares (CGLS) method and a conventional NLM method that only considers spatial correlation in an image. ssNLM is better able to suppress streak artifacts. The streaks lie along different projection directions in images at different channels; ssNLM discourages this dissimilarity and hence removes them, while true image structures are preserved. Measurements in regions of interest yield 1.1 to 3.2 and 1.5 to 1.8 times higher contrast-to-noise ratio than the NLM approach. The improvement over CGLS is even more profound due to the lack of regularization in the CGLS method and hence amplified noise. Conclusion: The

  14. Acellular dermal matrix based nipple reconstruction: A modified technique

    Directory of Open Access Journals (Sweden)

    Raghavan Vidya

    2017-09-01

    Full Text Available Nipple areolar reconstruction (NAR) has evolved with the advancement of breast reconstruction and can improve self-esteem and, consequently, patient satisfaction. Although a variety of reconstruction techniques have been described in the literature, varying from nipple sharing and local flaps to alloplastic and allograft augmentation, loss of nipple projection over time remains a major problem. Acellular dermal matrices (ADM) have revolutionised breast reconstruction more recently. We discuss the use of ADM to act as a base plate and strut to support the base and provide nipple bulk and projection in a primary NAR procedure with a local clover-shaped dermal flap in 5 breasts (4 patients). We used 5-point Likert scales (1 = highly unsatisfied, 5 = highly satisfied) to assess patient satisfaction. The median age was 46 years (range: 38–55 years). Nipple projections of 8 mm, 7 mm, and 7 mm were achieved in the unilateral cases and 6 mm in the bilateral case over a median 18-month period. All patients reported at least a 4 on the Likert scale. We had no post-operative complications. It seems that NAR using ADM can achieve nipple projection that patients consider aesthetically pleasing.

  15. A novel post-processing scheme for two-dimensional electrical impedance tomography based on artificial neural networks.

    Directory of Open Access Journals (Sweden)

    Sébastien Martin

    Full Text Available Electrical Impedance Tomography (EIT) is a powerful non-invasive technique for imaging applications. The goal is to estimate the electrical properties of living tissues by measuring the potential at the boundary of the domain. Being safe with respect to patient health, non-invasive, and having no known hazards, EIT is an attractive and promising technology. However, it suffers from a particular technical difficulty, which consists of solving a nonlinear inverse problem in real time. Several nonlinear approaches have been proposed as a replacement for the linear solver, but in practice very few are capable of stable, high-quality, and real-time EIT imaging because of their very low robustness to errors and inaccurate modeling, or because they require considerable computational effort. In this paper, a post-processing technique based on an artificial neural network (ANN) is proposed to obtain a nonlinear solution to the inverse problem, starting from a linear solution. While common reconstruction methods based on ANNs estimate the solution directly from the measured data, the method proposed here enhances the solution obtained from a linear solver. Applying a linear reconstruction algorithm before applying an ANN reduces the effects of noise and modeling errors. Hence, this approach significantly reduces the error associated with solving 2D inverse problems using machine-learning-based algorithms. This work presents radical enhancements in the stability of nonlinear methods for biomedical EIT applications.

  16. QR-decomposition based SENSE reconstruction using parallel architecture.

    Science.gov (United States)

    Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad

    2018-04-01

    Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies the inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU-based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU-based SENSE reconstruction is evaluated against single-core and multi-core CPU implementations using OpenMP. Several experiments with various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction compared to the multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images. Copyright © 2018 Elsevier Ltd. All rights reserved.
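    The core numerical step, inverting the rectangular encoding matrix via QR decomposition, reduces to one factorization plus a triangular solve. A toy dense example follows; the sizes and noise-free setup are assumptions, and the paper's implementation runs this on the GPU rather than through NumPy.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy SENSE-style system E x = y: a tall complex "encoding" matrix maps
# 4 unaliased pixel values to 16 coil measurements (sizes are illustrative).
E = rng.normal(size=(16, 4)) + 1j * rng.normal(size=(16, 4))
x_true = rng.normal(size=4) + 1j * rng.normal(size=4)
y = E @ x_true

# QR decomposition E = Q R turns the least-squares problem into a
# triangular solve: R x = Q^H y.
Q, R = np.linalg.qr(E)               # reduced QR: Q is 16x4, R is 4x4
x_hat = np.linalg.solve(R, Q.conj().T @ y)
```

Compared with forming the normal equations E^H E x = E^H y, the QR route avoids squaring the condition number of E, which matters at high acceleration factors where the encoding matrix becomes ill-conditioned.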

  17. Novel effects of demand side management data on accuracy of electrical energy consumption modeling and long-term forecasting

    International Nuclear Information System (INIS)

    Ardakani, F.J.; Ardehali, M.M.

    2014-01-01

    Highlights: • Novel effects of DSM data on electricity consumption forecasting are examined. • Optimal ANN models based on IPSO and SFL algorithms are developed. • Addition of DSM data to socio-economic indicator data reduces MAPE by 36%. - Abstract: Worldwide implementation of demand side management (DSM) programs has had positive impacts on electrical energy consumption (EEC), and the examination of their effects on long-term forecasting is warranted. The objective of this study is to investigate the effects of historical DSM data on the accuracy of EEC modeling and long-term forecasting. To achieve this objective, optimal artificial neural network (ANN) models based on improved particle swarm optimization (IPSO) and shuffled frog-leaping (SFL) algorithms are developed for EEC forecasting. For long-term EEC modeling and forecasting for the U.S. for 2010–2030, the two historical data types used in conjunction with the developed models are (i) EEC and (ii) socio-economic indicators, namely, gross domestic product, energy imports, energy exports, and population for the 1967–2009 period. Simulation results from the IPSO-ANN and SFL-ANN models show that using socio-economic indicators as input data achieves a lower mean absolute percentage error (MAPE) for long-term EEC forecasting, as compared with EEC data. Based on IPSO-ANN, it is found that, for U.S. long-term EEC forecasting, the addition of DSM data to socio-economic indicator data reduces MAPE by 36% and results in an estimated difference of 3592.8 MBOE (5849.9 TW h) in EEC for 2010–2030.
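    The accuracy metric quoted throughout the abstract is the mean absolute percentage error. A one-line reference implementation (the definition is standard; it is not taken from the paper's code):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)
```

For example, `mape([100, 200], [110, 190])` gives 7.5 (%). Note that MAPE is undefined when an actual value is zero, which is not an issue for aggregate energy-consumption series.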

  18. Magneto-acousto-electrical Measurement Based Electrical Conductivity Reconstruction for Tissues.

    Science.gov (United States)

    Zhou, Yan; Ma, Qingyu; Guo, Gepu; Tu, Juan; Zhang, Dong

    2018-05-01

    Based on the interaction of ultrasonic excitation and magnetoelectrical induction, magneto-acousto-electrical (MAE) technology was demonstrated to have the capability of differentiating conductivity variations along the acoustic transmission. By applying the characteristics of the MAE voltage, a simplified algorithm of MAE measurement based conductivity reconstruction was developed. With the analyses of acoustic vibration, ultrasound propagation, Hall effect, and magnetoelectrical induction, theoretical and experimental studies of MAE measurement and conductivity reconstruction were performed. The formula of MAE voltage was derived and simplified for the transducer with strong directivity. MAE voltage was simulated for a three-layer gel phantom and the conductivity distribution was reconstructed using the modified Wiener inverse filter and Hilbert transform, which was also verified by experimental measurements. The experimental results are basically consistent with the simulations, and demonstrate that the wave packets of MAE voltage are generated at tissue interfaces with the amplitudes and vibration polarities representing the values and directions of conductivity variations. With the proposed algorithm, the amplitude and polarity of conductivity gradient can be restored and the conductivity distribution can also be reconstructed accurately. The favorable results demonstrate the feasibility of accurate conductivity reconstruction with improved spatial resolution using MAE measurement for tissues with conductivity variations, especially suitable for nondispersive tissues with abrupt conductivity changes. This study demonstrates that the MAE measurement based conductivity reconstruction algorithm can be applied as a new strategy for nondestructive real-time monitoring of conductivity variations in biomedical engineering.
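    The post-processing chain extracts wave-packet envelopes with a Hilbert transform so that packet amplitude and position can be read off. The sketch below shows only that envelope step, applied to a synthetic two-packet voltage trace; the packet positions, widths, and carrier frequency are invented for illustration, and the Wiener inverse-filtering stage is omitted.

```python
import numpy as np

def envelope(v):
    """Amplitude envelope via the analytic signal (FFT-based Hilbert transform)."""
    n = len(v)
    V = np.fft.fft(v)
    h = np.zeros(n)                 # spectral filter selecting positive frequencies
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.abs(np.fft.ifft(V * h))

# Synthetic MAE-like voltage: wave packets at two "interfaces", with the
# second packet inverted to mimic an opposite-sign conductivity gradient.
t = np.linspace(0, 1, 1000)
packet = lambda t0: np.exp(-((t - t0) / 0.02) ** 2) * np.sin(2 * np.pi * 100 * (t - t0))
v = packet(0.3) - 0.5 * packet(0.7)
env = envelope(v)
```

The envelope peaks mark the two interfaces, while the sign flip on the second packet mimics the opposite vibration polarity that, per the abstract, encodes the direction of the conductivity change.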

  19. Transient stability enhancement of wind farms connected to a multi-machine power system by using an adaptive ANN-controlled SMES

    International Nuclear Information System (INIS)

    Muyeen, S.M.; Hasanien, Hany M.; Al-Durra, Ahmed

    2014-01-01

    Highlights: • We present an ANN-controlled SMES in this paper. • The objective is to enhance the transient stability of WFs connected to a power system. • The control strategy depends on a PWM VSC and a DC–DC converter. • The effectiveness of the proposed controller is compared with a PI controller. • The validity of the proposed system is verified by simulation results. - Abstract: This paper presents a novel adaptive artificial neural network (ANN)-controlled superconducting magnetic energy storage (SMES) system to enhance the transient stability of wind farms connected to a multi-machine power system during network disturbances. The control strategy of the SMES depends mainly on a sinusoidal pulse width modulation (PWM) voltage source converter (VSC) and an adaptive ANN-controlled DC–DC converter using insulated gate bipolar transistors (IGBTs). The effectiveness of the proposed adaptive ANN-controlled SMES is then compared with that of a proportional-integral (PI)-controlled SMES optimized by response surface methodology and a genetic algorithm (RSM–GA), considering both symmetrical and unsymmetrical faults. For realistic responses, real wind speed data and a two-mass drive-train model of the wind turbine generator system are considered in the analyses. The validity of the proposed system is verified by simulation results performed using the laboratory-standard dynamic power system simulator PSCAD/EMTDC. Notably, the proposed adaptive ANN-controlled SMES enhances the transient stability of wind farms connected to a multi-machine power system.

  20. Robust, Efficient Depth Reconstruction With Hierarchical Confidence-Based Matching.

    Science.gov (United States)

    Sun, Li; Chen, Ke; Song, Mingli; Tao, Dacheng; Chen, Gang; Chen, Chun

    2017-07-01

    In recent years, taking photos and capturing videos with mobile devices have become increasingly popular. Emerging applications based on the depth reconstruction technique have been developed, such as Google lens blur. However, depth reconstruction is difficult due to occlusions, non-diffuse surfaces, repetitive patterns, and textureless surfaces, and it has become more difficult due to the unstable image quality and uncontrolled scene condition in the mobile setting. In this paper, we present a novel hierarchical framework with multi-view confidence-based matching for robust, efficient depth reconstruction in uncontrolled scenes. Particularly, the proposed framework combines local cost aggregation with global cost optimization in a complementary manner that increases efficiency and accuracy. A depth map is efficiently obtained in a coarse-to-fine manner by using an image pyramid. Moreover, confidence maps are computed to robustly fuse multi-view matching cues, and to constrain the stereo matching on a finer scale. The proposed framework has been evaluated with challenging indoor and outdoor scenes, and has achieved robust and efficient depth reconstruction.

  1. Measurement and ANN prediction of pH-dependent solubility of nitrogen-heterocyclic compounds.

    Science.gov (United States)

    Sun, Feifei; Yu, Qingni; Zhu, Jingke; Lei, Lecheng; Li, Zhongjian; Zhang, Xingwang

    2015-09-01

    Based on the solubility of 25 nitrogen-heterocyclic compounds (NHCs) measured by saturation shake-flask method, artificial neural network (ANN) was employed to the study of the quantitative relationship between the structure and pH-dependent solubility of NHCs. With genetic algorithm-multivariate linear regression (GA-MLR) approach, five out of the 1497 molecular descriptors computed by Dragon software were selected to describe the molecular structures of NHCs. Using the five selected molecular descriptors as well as pH and the partial charge on the nitrogen atom of NHCs (QN) as inputs of ANN, a quantitative structure-property relationship (QSPR) model without using Henderson-Hasselbalch (HH) equation was successfully developed to predict the aqueous solubility of NHCs in different pH water solutions. The prediction model performed well on the 25 model NHCs with an absolute average relative deviation (AARD) of 5.9%, while HH approach gave an AARD of 36.9% for the same model NHCs. It was found that QN played a very important role in the description of NHCs and, with QN, ANN became a potential tool for the prediction of pH-dependent solubility of NHCs. Copyright © 2015 Elsevier Ltd. All rights reserved.
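    For context, the Henderson-Hasselbalch baseline that the ANN is compared against predicts pH-dependent solubility analytically from the intrinsic solubility and pKa. Below is a sketch for a monoprotic base, the form appropriate for many nitrogen heterocycles; the specific form and values used in the paper are not given in the abstract.

```python
def hh_solubility_base(s0, pka, ph):
    """Henderson-Hasselbalch total solubility of a monoprotic base:
    S = S0 * (1 + 10**(pKa - pH)), where S0 is the intrinsic solubility
    of the neutral species."""
    return s0 * (1.0 + 10.0 ** (pka - ph))
```

At pH = pKa the ionized and neutral forms contribute equally, so the total solubility is exactly twice the intrinsic value; the large AARD reported for the HH approach reflects how strongly real NHCs deviate from this ideal curve.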

  2. Constraints on reconstructed dark energy model from SN Ia and BAO/CMB observations

    Energy Technology Data Exchange (ETDEWEB)

    Mamon, Abdulla Al [Manipal University, Manipal Centre for Natural Sciences, Manipal (India); Visva-Bharati, Department of Physics, Santiniketan (India); Bamba, Kazuharu [Fukushima University, Division of Human Support System, Faculty of Symbiotic Systems Science, Fukushima (Japan); Das, Sudipta [Visva-Bharati, Department of Physics, Santiniketan (India)

    2017-01-15

    The motivation of the present work is to reconstruct a dark energy model through the dimensionless dark energy function X(z), which is the dark energy density in units of its present value. In this paper, we have shown that a scalar field φ having a phenomenologically chosen X(z) can give rise to a transition from a decelerated to an accelerated phase of expansion for the universe. We have examined the possibility of constraining various cosmological parameters (such as the deceleration parameter and the effective equation of state parameter) by comparing our theoretical model with the latest Type Ia Supernova (SN Ia), Baryon Acoustic Oscillation (BAO) and Cosmic Microwave Background (CMB) radiation observations. Using the joint analysis of the SN Ia+BAO/CMB dataset, we have also reconstructed the scalar potential from the parametrized X(z). The reconstructed potential is found to be a polynomial in φ. From our analysis, it has been found that the present model favors the standard ΛCDM model within the 1σ confidence level. (orig.)
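    The role of X(z) in the background expansion can be made explicit with the standard flat-universe relations. These are textbook identities, not the paper's specific parametrization of X(z), which the abstract does not give:

```latex
% Dark energy density in units of its present value, for equation of state w(z):
X(z) \equiv \frac{\rho_{DE}(z)}{\rho_{DE}(0)}
      = \exp\!\left[\, 3 \int_0^z \frac{1 + w(z')}{1 + z'} \, dz' \right],
% so that, for a spatially flat universe,
\frac{H^2(z)}{H_0^2} = \Omega_m (1+z)^3 + (1 - \Omega_m)\, X(z).
```

Note that w = -1 gives X(z) = 1 identically, recovering ΛCDM; choosing a phenomenological X(z) therefore fixes w(z) and the expansion history together, which is what allows the decelerated-to-accelerated transition to be engineered directly.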

  3. ALICE EMCal Reconstructable Energy Non-Linearity From Test Beam Monte Carlo

    CERN Document Server

    Carter, Thomas Michael

    2017-01-01

    Calorimeters play many important roles in modern high energy physics detectors, such as event selection, triggering, and precision energy measurements. In the case of the ALICE experiment, the EMCal provides triggering on high energy jets, improves jet quenching study measurement bias and jet energy resolution, and improves electron and photon measurements [3]. With the EMCal detector in the ALICE experiment taking on so many important roles, it is important to fully understand, characterize and model its interactions with particles. In 2010, SPS and PS electron test beam measurements were performed on an EMCal mini-module [2]. Alongside this, the test beam setup and geometry were recreated in Geant4 by Nico [1]. Figure 1 shows the reconstructable energy linearity for the SPS test beam data and that obtained from the test beam Monte Carlo, indicating the amount of energy deposited as hits in the EMCal module. It can be seen that for energies above ∼ 100 GeV there is a significant drop in the reconstructable energy m...

  4. Parallelization of an existing high energy physics event reconstruction software package

    International Nuclear Information System (INIS)

    Schiefer, R.; Francis, D.

    1996-01-01

    Software parallelization allows an efficient use of available computing power to increase the performance of applications. In a case study the authors have investigated the parallelization of high energy physics event reconstruction software in terms of costs (effort, computing resource requirements), benefits (performance increase) and the feasibility of a systematic parallelization approach. Guidelines facilitating a parallel implementation are proposed for future software development

  5. Identification of drought in Dhalai river watershed using MCDM and ANN models

    Science.gov (United States)

    Aher, Sainath; Shinde, Sambhaji; Guha, Shantamoy; Majumder, Mrinmoy

    2017-03-01

    An innovative approach for drought identification is developed using Multi-Criteria Decision Making (MCDM) and Artificial Neural Network (ANN) models from surveyed drought parameter data around the Dhalai river watershed in the Tripura hinterlands, India. In total, eight drought parameters, i.e., precipitation, soil moisture, evapotranspiration, vegetation canopy, cropping pattern, temperature, cultivated land, and groundwater level, were obtained from expert, literature and cultivator surveys. Then, the Analytic Hierarchy Process (AHP) and Analytic Network Process (ANP) were used for the weighting of parameters and Drought Index Identification (DII). Field data for the weighted parameters in the meso-scale Dhalai river watershed were collected and used to train the ANN model. The developed ANN model was then used in the same watershed for the identification of drought. Results indicate that the Limited-Memory Quasi-Newton algorithm performed better than the commonly used training method. Results obtained from the ANN model show that the drought index developed for the study area ranges from 0.32 to 0.72. The overall analysis revealed that, with appropriate training, the ANN model can be used in the areas where it is calibrated, or in other areas where the range of input parameters is similar to that of the calibrated region, for drought identification.

  6. Simulation study of two-energy X-ray fluorescence holograms reconstruction algorithm to remove twin images

    International Nuclear Information System (INIS)

    Xie Honglan; Hu Wen; Luo Hongxin; Deng Biao; Du Guohao; Xue Yanling; Chen Rongchang; Shi Shaomeng; Xiao Tiqiao

    2008-01-01

    Unlike traditional outside-source holography, X-ray fluorescence holography is carried out with fluorescent atoms in the sample serving as the source light for holographic imaging. With this method, the three-dimensional arrangement of atoms in crystals can be observed directly. However, just like traditional outside-source holography, X-ray fluorescence holography suffers from the inherent twin-image problem. Using a cubic lattice of 27 Fe atoms as a model, we discuss the influence of the incident photon energy on the removal of twin images from reconstructed atomic images, by numerical simulation and reconstruction with two-energy X-ray fluorescence holography. The results indicate that incident X-rays of closer energies remove twin images more effectively. For detector-mode X-ray holography, the minimum difference between the two incident energies depends on the energy resolution of the monochromator and the detector; for inside-source X-ray holography, it depends on the difference between two neighboring fluorescent energies emitted by the element and on the energy resolution of the detector. The spatial resolution of the atomic images increases with the incident energies. These results are important for the X-ray fluorescence holography experiments being developed at the Shanghai Synchrotron Radiation Facility. (authors)

  7. LHCb Jet Reconstruction

    CERN Multimedia

    Augusto, O

    2012-01-01

    The Large Hadron Collider (LHC) is the most powerful particle accelerator in the world. It has been designed to collide proton beams at an energy of up to 14 TeV in the center of mass. In 2011, data taking was done at a center-of-mass energy of 7 TeV; the instantaneous luminosity reached values greater than $4 \times 10^{32} cm^{-2} s^{-1}$ and the integrated luminosity recorded by LHCb reached 1.02 $fb^{-1}$. Jet reconstruction is fundamental for observing events that can be used to test perturbative QCD (pQCD). It also provides a way to observe standard model channels and to search for new physics such as SUSY. The anti-kt algorithm is a jet reconstruction algorithm based on the distance between particles in the $\eta \times \phi$ space and on the transverse momentum of the particles. To maximize the energy resolution all information about the trackers and the calo...
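The anti-kt distance measure this abstract refers to can be sketched from its standard definition (a minimal illustration, not the LHCb implementation; the particle values below are hypothetical):

```python
import math

def antikt_distance(pt_i, pt_j, y_i, phi_i, y_j, phi_j, R=0.5):
    """Anti-kt pairwise distance:
    d_ij = min(1/pt_i^2, 1/pt_j^2) * deltaR^2 / R^2,
    with deltaR^2 = (y_i - y_j)^2 + dphi^2 in the (rapidity, phi) plane."""
    dphi = abs(phi_i - phi_j)
    if dphi > math.pi:                 # wrap the azimuthal difference
        dphi = 2 * math.pi - dphi
    dr2 = (y_i - y_j) ** 2 + dphi ** 2
    return min(pt_i ** -2, pt_j ** -2) * dr2 / R ** 2

def antikt_beam_distance(pt_i):
    """Particle-beam distance d_iB = 1/pt_i^2."""
    return pt_i ** -2

# A soft particle near a hard one: the pairwise distance is tiny,
# so the soft particle clusters onto the hard one first.
d_pair = antikt_distance(100.0, 1.0, 0.0, 0.0, 0.1, 0.0)
```

Because the distance is dominated by the harder particle's 1/pt^2, anti-kt jets grow outward from hard cores, which is what makes the algorithm robust against soft background.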

  8. On-line event reconstruction using a parallel in-memory data base

    OpenAIRE

    Argante, E; Van der Stok, P D V; Willers, Ian Malcolm

    1995-01-01

    PORS is a system designed for on-line event reconstruction in high energy physics (HEP) experiments. It uses the CPREAD reconstruction program. Central to the system is a parallel in-memory database, which is used as the communication medium between parallel workers. A farming control structure is implemented with PORS in a natural way. The database provides structured storage of data with a short lifetime. PORS serves as a case study for the construction of a methodology on how to apply parallel...

  9. Reconstruction of limited-angle dual-energy CT using mutual learning and cross-estimation (MLCE)

    Science.gov (United States)

    Zhang, Huayu; Xing, Yuxiang

    2016-03-01

    Dual-energy CT (DECT) imaging has gained considerable attention because of its capability to discriminate materials. We propose a flexible DECT scan strategy that can be realized on a system with general X-ray sources and detectors. To lower dose and scanning time, our DECT acquires two projection data sets on two arcs of limited angular coverage (one for each energy). Meanwhile, a certain number of rays from the two data sets form conjugate sampling pairs. Our reconstruction method for such a DECT scan mainly tackles the consequent limited-angle problem. Using the idea of an artificial neural network, we exploit the connection between projections at the two energies by constructing a relationship between the linear attenuation coefficient at the high energy and that at the low one. We use this relationship to cross-estimate missing projections and, for each energy, reconstruct attenuation images from an augmented data set that includes projections at views covered by that energy (collected in scanning) and by the other energy (estimated). Validated by a numerical experiment on a dental phantom with rather complex structures, our DECT is effective in recovering small structures in severe limited-angle situations. This scanning strategy could considerably broaden DECT design in practice.
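The cross-estimation idea can be illustrated in miniature. For simplicity, this sketch fits a linear relationship between low- and high-energy values from the conjugate sampling pairs, in place of the paper's neural network, and then uses it to fill in missing projections (all values hypothetical):

```python
def fit_cross_estimator(mu_low, mu_high):
    """Least-squares line mu_high ~= a * mu_low + b, fitted from
    conjugate ray pairs measured at both energies."""
    n = len(mu_low)
    mx = sum(mu_low) / n
    my = sum(mu_high) / n
    sxx = sum((x - mx) ** 2 for x in mu_low)
    sxy = sum((x - mx) * (y - my) for x, y in zip(mu_low, mu_high))
    a = sxy / sxx
    b = my - a * mx
    return lambda x: a * x + b

# Hypothetical conjugate pairs; estimate a high-energy projection value
# at a view covered only by the low-energy arc.
estimate = fit_cross_estimator([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
missing_high = estimate(1.5)
```

The paper's neural-network mapping plays the same role as `estimate` here, but can capture the nonlinear energy dependence of attenuation that a single line cannot.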

  10. Tau lepton reconstruction with energy flow and the search for R-parity violating supersymmetry at the ATLAS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Fleischmann, Sebastian

    2012-10-15

    This thesis investigates the discovery potential of the ATLAS experiment at the Large Hadron Collider (LHC) for R-parity violating (RPV) supersymmetric (SUSY) models in the framework of mSUGRA, where the stau (τ) is the lightest supersymmetric particle (LSP). Hence, the LSP is charged and decays in contrast to R-parity conserving models. For the first time in the framework of this RPV model a detailed signal to background analysis is performed for a specific benchmark scenario using a full Monte Carlo simulation of the ATLAS detector. Furthermore a feasibility study for an estimate of the stau LSP mass is given. The fast track simulation FATRAS is a new approach for the Monte Carlo simulation of particles in the tracking systems of the ATLAS experiment. Its results are compared to first data at √(s) = 900 GeV. Additionally, two generic detector simulations are compared to the full simulation. The reconstruction of tau leptons is crucial for many searches for new physics with ATLAS. Therefore, the reconstruction of tracks for particles from tau decays is studied. A novel method, PanTau, is presented for the tau reconstruction in ATLAS. It is based on the energy flow algorithm eflowRec. Its performance is evaluated in Monte Carlo simulations. The dependency of the identification variables on the jet energy are studied in detail. Finally, the energy flow quantities and the identification variables are compared between Monte Carlo simulations and measured multijet events with first ATLAS data at √(s) = 7 TeV.

  11. Tau lepton reconstruction with energy flow and the search for R-parity violating supersymmetry at the ATLAS experiment

    International Nuclear Information System (INIS)

    Fleischmann, Sebastian

    2012-10-01

    This thesis investigates the discovery potential of the ATLAS experiment at the Large Hadron Collider (LHC) for R-parity violating (RPV) supersymmetric (SUSY) models in the framework of mSUGRA, where the stau (τ) is the lightest supersymmetric particle (LSP). Hence, the LSP is charged and decays in contrast to R-parity conserving models. For the first time in the framework of this RPV model a detailed signal to background analysis is performed for a specific benchmark scenario using a full Monte Carlo simulation of the ATLAS detector. Furthermore a feasibility study for an estimate of the stau LSP mass is given. The fast track simulation FATRAS is a new approach for the Monte Carlo simulation of particles in the tracking systems of the ATLAS experiment. Its results are compared to first data at √(s) = 900 GeV. Additionally, two generic detector simulations are compared to the full simulation. The reconstruction of tau leptons is crucial for many searches for new physics with ATLAS. Therefore, the reconstruction of tracks for particles from tau decays is studied. A novel method, PanTau, is presented for the tau reconstruction in ATLAS. It is based on the energy flow algorithm eflowRec. Its performance is evaluated in Monte Carlo simulations. The dependency of the identification variables on the jet energy are studied in detail. Finally, the energy flow quantities and the identification variables are compared between Monte Carlo simulations and measured multijet events with first ATLAS data at √(s) = 7 TeV.

  12. Error Propagation dynamics: from PIV-based pressure reconstruction to vorticity field calculation

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Richards, Geordie; Truscott, Tadd; USU Team; BYU Team

    2017-11-01

    Noninvasive data from velocimetry experiments (e.g., PIV) have been used to calculate vorticity and pressure fields. However, noise, error, or uncertainty in the PIV measurements propagates to the calculated pressure or vorticity field through the reconstruction schemes. Despite the vast applications of pressure and/or vorticity fields calculated from PIV measurements, studies on the error propagation from the velocity field to the reconstructed fields (PIV-pressure and PIV-vorticity) are few. In the current study, we break down the inherent connections between PIV-based pressure reconstruction and PIV-based vorticity calculation. Similar error propagation dynamics, which involve competition between the physical properties of the flow and the numerical errors from the reconstruction schemes, are found in both PIV-pressure and PIV-vorticity reconstructions.
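The PIV-vorticity calculation discussed here is typically a central-difference stencil applied to the gridded velocity field, which is also where measurement noise enters the result; a minimal sketch (not the authors' code):

```python
def vorticity(u, v, dx, dy):
    """Central-difference vorticity omega = dv/dx - du/dy on a 2D grid.
    u[i][j] and v[i][j] are velocity components at row i (y) and
    column j (x); boundary cells are left at zero."""
    ny, nx = len(u), len(u[0])
    w = [[0.0] * nx for _ in range(ny)]
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            dvdx = (v[i][j + 1] - v[i][j - 1]) / (2 * dx)
            dudy = (u[i + 1][j] - u[i - 1][j]) / (2 * dy)
            w[i][j] = dvdx - dudy
    return w

# Solid-body rotation u = -y, v = x has uniform vorticity 2.
n = 5
u = [[-float(i) for _j in range(n)] for i in range(n)]
v = [[float(j) for j in range(n)] for _i in range(n)]
w = vorticity(u, v, 1.0, 1.0)
```

Because the stencil differences neighboring samples, uncorrelated velocity noise is amplified by a factor of order 1/dx, which is one face of the error-propagation competition the abstract describes.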

  13. Comparison of the accuracy of SST estimates by artificial neural networks (ANN) and other quantitative methods using radiolarian data from the Antarctic and Pacific Oceans

    Digital Repository Service at National Institute of Oceanography (India)

    Gupta, S.M.; Malmgren, B.A.

    ) regression, the maximum likelihood (ML) method, and artificial neural networks (ANNs), based on radiolarian faunal abundance data from surface sediments from the Antarctic and Pacific Oceans. Recent studies have suggested that ANNs may represent one...

  14. Ann Arbor Session I: Breaking Ground.

    Science.gov (United States)

    Music Educators Journal, 1979

    1979-01-01

    Summarizes the first session of the National Symposium on the Applications of Psychology to the Teaching and Learning of Music held at Ann Arbor from October 30 to November 2, 1978. Sessions concerned auditory perception, motor learning, child development, memory and information processing, and affect and motivation. (SJL)

  15. Optimization-based reconstruction for reduction of CBCT artifact in IGRT

    Science.gov (United States)

    Xia, Dan; Zhang, Zheng; Paysan, Pascal; Seghers, Dieter; Brehm, Marcus; Munro, Peter; Sidky, Emil Y.; Pelizzari, Charles; Pan, Xiaochuan

    2016-04-01

    Kilo-voltage cone-beam computed tomography (CBCT) plays an important role in image-guided radiation therapy (IGRT) by providing 3D spatial information about the tumor that is potentially useful for optimizing treatment planning. In current IGRT CBCT systems, reconstructed images obtained with analytic algorithms, such as the FDK algorithm and its variants, may contain artifacts. In an attempt to compensate for the artifacts, we investigate optimization-based reconstruction algorithms such as the ASD-POCS algorithm for potentially reducing artifacts in IGRT CBCT images. In this study, using data acquired with a physical phantom and a patient subject, we demonstrate that ASD-POCS reconstruction can significantly reduce artifacts observed in clinical reconstructions. Moreover, patient images reconstructed by use of the ASD-POCS algorithm show a soft-tissue contrast level improved over that of the clinical reconstruction. We have also performed reconstructions from sparse-view data and observe that, for current clinical imaging conditions, ASD-POCS reconstructions from data collected at one half of the current clinical projection views appear to show image quality, in terms of spatial and soft-tissue-contrast resolution, higher than that of the corresponding clinical reconstructions.

  16. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    International Nuclear Information System (INIS)

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-01-01

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  17. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Science.gov (United States)

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP (P ASIR-V 60% with respective CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 (P ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.
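The contrast-to-noise ratio (CNR) values reported in this abstract follow the usual region-of-interest definition: the mean difference between a signal ROI and a background ROI, divided by the background noise. A generic sketch (the study's exact ROI convention may differ):

```python
import statistics

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio: |mean(signal) - mean(background)|
    divided by the standard deviation (noise) of the background ROI."""
    contrast = abs(statistics.mean(signal_roi) - statistics.mean(background_roi))
    noise = statistics.stdev(background_roi)
    return contrast / noise

# Hypothetical pixel samples from a lesion ROI and a background ROI.
value = cnr([10.0, 10.0, 10.0], [0.0, 2.0, 4.0])
```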

  18. Proof and implementation of the stochastic formula for ideal gas, energy dependent scattering kernel

    International Nuclear Information System (INIS)

    Becker, B.; Dagan, R.; Lohnert, G.

    2009-01-01

    The ideal-gas scattering kernel for heavy nuclei with pronounced resonances was developed [Rothenstein, W., Dagan, R., 1998. Ann. Nucl. Energy 25, 209-222], proved and implemented [Rothenstein, W., 2004. Ann. Nucl. Energy 31, 9-23] in the data-processing code NJOY [Macfarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91, LA-12740-M], from which the scattering probability tables were prepared [Dagan, R., 2005. Ann. Nucl. Energy 32, 367-377]. Those tables were introduced into the well-known MCNP code [X-5 Monte Carlo Team. MCNP - A General Monte Carlo N-Particle Transport Code version 5, LA-UR-03-1987] via the 'mt' input cards, in the same manner as is done for light nuclei in the thermal energy range. In this study we present an alternative methodology for solving the double-differential, energy-dependent scattering kernel that is based solely on stochastic considerations as far as the scattering probabilities are concerned. The solution scheme is based on an alternative rejection scheme suggested by Rothenstein [Rothenstein, W., ENS conference 1994, Tel Aviv]. Comparison with the above-mentioned analytical (probability S(α,β)-tables) approach confirms that the suggested rejection scheme provides accurate results. The uncertainty concerning the magnitude of the bias due to the enhanced multiple rejections during the sampling procedure is shown to lie within 1-2 standard deviations for all practical cases that were analysed.

  19. Cone-beam local reconstruction based on a Radon inversion transformation

    International Nuclear Information System (INIS)

    Wang Xian-Chao; Yan Bin; Li Lei; Hu Guo-En

    2012-01-01

    Local reconstruction from truncated projection data is one area of interest in image reconstruction for computed tomography (CT), as it creates the possibility of dose reduction. In this paper, a filtered-backprojection (FBP) algorithm based on the Radon inversion transform is presented to deal with three-dimensional (3D) local reconstruction in the circular geometry. The algorithm achieves the data filtering in two steps. The first step is the derivative of the projections, which acts locally on the data and can thus be carried out accurately even in the presence of data truncation. The second step is the nonlocal Hilbert filtering. Numerical simulations and real-data reconstructions have been conducted to validate the new reconstruction algorithm. Compared with the approximate truncation-resistant algorithm for computed tomography (ATRACT), not only does it have a comparable ability to restrain truncation artifacts, but its reconstruction efficiency is also improved: it is about twice as fast as ATRACT. Therefore, this work provides a simple and efficient approach for approximate reconstruction from truncated projections in circular cone-beam CT
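The two-step filtering described in this abstract can be sketched in one dimension: a local derivative (robust to truncation) followed by a nonlocal Hilbert filter. The sketch below is a DFT-based illustration of those two operators, not the authors' implementation:

```python
import cmath

def derivative(p, step=1.0):
    """Local step: central-difference derivative of one projection row
    (periodic boundaries for simplicity)."""
    n = len(p)
    return [(p[(k + 1) % n] - p[(k - 1) % n]) / (2 * step) for k in range(n)]

def hilbert(p):
    """Nonlocal step: discrete Hilbert transform via the DFT,
    multiplying positive frequencies by -i and negative ones by +i."""
    n = len(p)
    F = [sum(p[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
         for f in range(n)]
    for f in range(n):
        if 0 < f < n / 2:
            F[f] *= -1j          # positive frequencies
        elif f > n / 2:
            F[f] *= 1j           # negative frequencies
        else:
            F[f] = 0             # DC and Nyquist
    return [sum(F[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)).real / n for t in range(n)]
```

The derivative only touches neighboring samples, which is why truncated data do not corrupt it; all the nonlocality of FBP filtering is concentrated in the Hilbert step.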

  20. Automatic system for 3D reconstruction of the chick eye based on digital photographs.

    Science.gov (United States)

    Wong, Alexander; Genest, Reno; Chandrashekar, Naveen; Choh, Vivian; Irving, Elizabeth L

    2012-01-01

    The geometry of anatomical specimens is very complex, and accurate 3D reconstruction is important for morphological studies, finite element analysis (FEA) and rapid prototyping. Although magnetic resonance imaging, computed tomography and laser scanners can be used for reconstructing biological structures, the cost of the equipment is fairly high and specialised technicians are required to operate it, making such approaches limited in terms of accessibility. In this paper, a novel automatic system for 3D surface reconstruction of the chick eye from digital photographs of a serially sectioned specimen is presented as a potential cost-effective and practical alternative. The system is designed to allow for automatic detection of the external surface of the chick eye. Automatic alignment of the photographs is performed using a combination of coloured markers and an algorithm based on complex phase order likelihood that is robust to noise and illumination variations. Automatic segmentation of the external boundaries of the eye from the aligned photographs is performed using a novel level-set segmentation approach based on a complex phase order energy functional. The extracted boundaries are sampled to construct a 3D point cloud, and a combination of Delaunay triangulation and subdivision surfaces is employed to construct the final triangular mesh. Experimental results using digital photographs of the chick eye show that the proposed system is capable of producing accurate 3D reconstructions of the external surface of the eye. The 3D model geometry is similar to a real chick eye and could be used for morphological studies and FEA.

  1. Jet reconstruction at high-energy electron-positron colliders

    Energy Technology Data Exchange (ETDEWEB)

    Boronat, M.; Fuster, J.; Garcia, I.; Vos, M. [IFIC (CSIC/UVEG), Valencia (Spain); Roloff, P.; Simoniello, R. [CERN, Geneva (Switzerland)

    2018-02-15

    In this paper we study the performance in e⁺e⁻ collisions of classical e⁺e⁻ jet reconstruction algorithms, longitudinally invariant algorithms and the recently proposed Valencia algorithm. The study includes a comparison of perturbative and non-perturbative jet energy corrections and the response under realistic background conditions. Several algorithms are benchmarked with a detailed detector simulation at √(s) = 3 TeV. We find that the classical e⁺e⁻ algorithms, with or without beam jets, have the best response, but they are inadequate in environments with non-negligible background. The Valencia algorithm and longitudinally invariant k_t algorithms have a much more robust performance, with a slight advantage for the former. (orig.)

  2. Reconstruction of chaotic signals with applications to chaos-based communications

    CERN Document Server

    Feng, Jiu Chao

    2008-01-01

    This book provides a systematic review of the fundamental theory of signal reconstruction and the practical techniques used in reconstructing chaotic signals. Specific applications of signal reconstruction methods in chaos-based communications are expounded in full detail, along with examples illustrating the various problems associated with such applications. The book serves as an advanced textbook for undergraduate and graduate courses in electronic and information engineering, automatic control, physics and applied mathematics. It is also highly suited for general nonlinear scientists who wi

  3. Inference-Based Surface Reconstruction of Cluttered Environments

    KAUST Repository

    Biggers, K.

    2012-08-01

    We present an inference-based surface reconstruction algorithm that is capable of identifying objects of interest among a cluttered scene, and reconstructing solid model representations even in the presence of occluded surfaces. Our proposed approach incorporates a predictive modeling framework that uses a set of user-provided models for prior knowledge, and applies this knowledge to the iterative identification and construction process. Our approach uses a local to global construction process guided by rules for fitting high-quality surface patches obtained from these prior models. We demonstrate the application of this algorithm on several example data sets containing heavy clutter and occlusion. © 2012 IEEE.

  4. Registration-based Reconstruction of Four-dimensional Cone Beam Computed Tomography

    DEFF Research Database (Denmark)

    Christoffersen, Christian; Hansen, David Christoffer; Poulsen, Per Rugaard

    2013-01-01

    We present a new method for reconstruction of four-dimensional (4D) cone beam computed tomography from an undersampled set of X-ray projections. The novelty of the proposed method lies in utilizing optical flow based registration to facilitate that each temporal phase is reconstructed from the full...

  5. A Hybrid FEM-ANN Approach for Slope Instability Prediction

    Science.gov (United States)

    Verma, A. K.; Singh, T. N.; Chauhan, Nikhil Kumar; Sarkar, K.

    2016-09-01

    Assessment of slope stability is one of the most critical aspects for the life of a slope. In any slope vulnerability appraisal, the Factor of Safety (FOS) is the widely accepted index for understanding how close to or far from failure a slope is. In this work, an attempt has been made to simulate a road-cut slope in a landslide-prone area in Rudraprayag, Uttarakhand, India, which lies near the Himalayan geodynamic mountain belt. A combination of the Finite Element Method (FEM) and an Artificial Neural Network (ANN) has been adopted to predict the FOS of the slope. For the ANN, a three-layer, feed-forward back-propagation neural network with one input layer, one hidden layer with three neurons, and one output layer has been considered, trained using datasets generated from numerical analysis of the slope, and validated with a new set of field slope data. The mean absolute percentage error was estimated as 1.04, with a coefficient of correlation between the FEM and ANN FOS values of 0.973, which indicates that the system is very robust and fast in predicting the FOS for any slope.

  6. Practical considerations for image-based PSF and blobs reconstruction in PET

    International Nuclear Information System (INIS)

    Stute, Simon; Comtat, Claude

    2013-01-01

    Iterative reconstructions in positron emission tomography (PET) need a model relating the recorded data to the object/patient being imaged, called the system matrix (SM). The more realistic this model, the better the spatial resolution in the reconstructed images. However, a serious concern when using an SM that accurately models the resolution properties of the PET system is the undesirable edge artefact, visible through oscillations near sharp discontinuities in the reconstructed images. This artefact is a natural consequence of solving an ill-conditioned inverse problem, where the recorded data are band-limited. In this paper, we focus on practical aspects when considering image-based point-spread function (PSF) reconstructions. To remove the edge artefact, we propose to use a particular case of the method of sieves (Grenander 1981 Abstract Inference, New York: Wiley), which simply consists in performing a standard PSF reconstruction, followed by a post-smoothing using the PSF as the convolution kernel. Using analytical simulations, we investigate the impact of different reconstruction and PSF modelling parameters on the edge artefact and its suppression, in the case of noise-free data and an exactly known PSF. Using Monte Carlo simulations, we assess the proposed method of sieves with respect to the choice of the geometric projector and the PSF model used in the reconstruction. When the PSF model is accurately known, we show that the proposed method of sieves succeeds in completely suppressing the edge artefact, though after a number of iterations higher than typically used in practice. When applying the method to realistic data (i.e. unknown true SM and noisy data), we show that the choice of the geometric projector and the PSF model does not impact the results in terms of noise and contrast recovery, as long as the PSF has a width close to that of the true PSF. Equivalent results were obtained using either blobs or voxels in the same conditions (i.e. the blob
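The method of sieves as used here amounts to a standard PSF reconstruction followed by post-smoothing with the PSF itself as convolution kernel. A 1D sketch of the post-smoothing step, using a Gaussian as a stand-in PSF with hypothetical parameters (not the authors' implementation):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalized 1D Gaussian kernel, standing in for the PSF."""
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def post_smooth(row, kernel):
    """Post-smoothing step of the method of sieves: convolve the
    PSF-based reconstruction with the PSF (zero-padded edges)."""
    r = len(kernel) // 2
    n = len(row)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            t = i + j - r
            if 0 <= t < n:
                acc += w * row[t]
        out.append(acc)
    return out

psf = gaussian_kernel(sigma=1.0, radius=2)
smoothed = post_smooth([0.0, 0.0, 1.0, 4.0, 1.0, 0.0, 0.0], psf)
```

The smoothing suppresses the high-frequency ringing of the edge artefact at the cost of reintroducing the PSF blur, which is the trade-off the paper quantifies.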

  7. Direct Reconstruction of CT-based Attenuation Correction Images for PET with Cluster-Based Penalties

    Science.gov (United States)

    Kim, Soo Mee; Alessio, Adam M.; De Man, Bruno; Asma, Evren; Kinahan, Paul E.

    2015-01-01

    Extremely low-dose CT acquisitions for the purpose of PET attenuation correction will have a high level of noise and biasing artifacts due to factors such as photon starvation. This work explores a priori knowledge appropriate for CT iterative image reconstruction for PET attenuation correction. We investigate the maximum a posteriori (MAP) framework with cluster-based, multinomial priors for the direct reconstruction of the PET attenuation map. The objective function for direct iterative attenuation map reconstruction was modeled as a Poisson log-likelihood with prior terms consisting of quadratic (Q) and mixture (M) distributions. The attenuation map is assumed to have values in 4 clusters: air+background, lung, soft tissue, and bone. Under this assumption, the mixture prior was a mixture probability density function consisting of one exponential and three Gaussian distributions. The relative proportion of each cluster was jointly estimated during each voxel update of the direct iterative coordinate descent (dICD) method. Noise-free data were generated from the NCAT phantom and Poisson noise was added. Reconstruction with FBP (ramp filter) was performed on the noise-free (ground truth) and noisy data. For the noisy data, dICD reconstruction was performed with combinations of different prior strength parameters (β and γ) for the Q- and M-penalties. The combined quadratic and mixture penalties reduce the RMSE by 18.7% compared to post-smoothed iterative reconstruction and only 0.7% compared to quadratic alone. For direct PET attenuation map reconstruction from ultra-low dose CT acquisitions, the combination of quadratic and mixture priors offers regularization of both variance and bias and is a potential method to derive attenuation maps with negligible patient dose. However, the small improvement in quantitative accuracy relative to the substantial increase in algorithm complexity does not currently justify the use of mixture-based PET attenuation priors for reconstruction of CT
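The mixture prior described above, one exponential component (air+background) plus Gaussian components (lung, soft tissue, bone), can be sketched as a density over the attenuation value; the cluster weights and parameters below are hypothetical, not those of the paper:

```python
import math

def mixture_prior(mu, weights, exp_rate, gauss_params):
    """Cluster-based mixture density for an attenuation value mu:
    weights[0] * Exponential(exp_rate) for air/background, plus
    weights[k] * Normal(mean, sd) for each (mean, sd) in gauss_params."""
    p = weights[0] * exp_rate * math.exp(-exp_rate * mu) if mu >= 0 else 0.0
    for w, (m, s) in zip(weights[1:], gauss_params):
        p += w * math.exp(-0.5 * ((mu - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return p

# Hypothetical 4-cluster parameterization (units arbitrary):
# air/background, lung, soft tissue, bone.
density = mixture_prior(0.095,
                        [0.4, 0.1, 0.4, 0.1],
                        exp_rate=50.0,
                        gauss_params=[(0.03, 0.01), (0.095, 0.005), (0.17, 0.02)])
```

In the MAP objective this density enters as a log-prior penalty, pulling each voxel toward the nearest cluster mode while the Poisson log-likelihood keeps it consistent with the data.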

  8. Simulating the energy deposits of particles in the KASCADE-grande detector stations as a preliminary step for EAS event reconstruction

    International Nuclear Information System (INIS)

    Toma, G.; Brancus, I.M.; Mitrica, B.; Sima, O.; Rebel, H.; Haungs, A.

    2005-01-01

    The study of primary cosmic rays with energies higher than 10^14 eV is done mostly by indirect observation techniques such as the study of Extensive Air Showers (EAS). Within the much larger effort of inferring the mass and energy of the primaries from EAS observables, the present study aims at developing a versatile method and software tool for reconstructing lateral particle densities from the energy deposits of particles in the KASCADE-Grande detector stations. The study has been performed on simulated events, taking into account the interaction of the EAS components with the detector array (energy deposits). The energy deposits have been simulated using the GEANT code and then parametrized for different incident energies and angles of the EAS particles; thus the results obtained for simulated events have the same level of consistency as the experimental data. This technique will allow faster reconstruction of lateral particle densities when studying real events detected by the KASCADE-Grande array. The particle densities in the detectors have been reconstructed from the energy deposits, and a correlation between the lateral particle density (at ∼600 m from the shower core) and the primary mass and energy has been established. The study puts great emphasis on the quality of the reconstruction as well as on the speed of the technique. The data obtained from the study of simulated events create the basis for the next stage: the study of real events detected by the KASCADE-Grande array. (authors)

  9. Comparative performance analysis of the artificial-intelligence-based thermal control algorithms for the double-skin building

    International Nuclear Information System (INIS)

    Moon, Jin Woo

    2015-01-01

This study aimed at developing artificial intelligence (AI)-theory-based optimal control algorithms for improving the indoor temperature conditions and heating energy efficiency of double-skin buildings. For this, one conventional rule-based and four AI-based algorithms were developed, based on artificial neural networks (ANN), fuzzy logic (FL), and adaptive neuro-fuzzy inference systems (ANFIS), for operating the surface openings of the double skin and the heating system. A numerical computer simulation incorporating the matrix laboratory (MATLAB) and the transient systems simulation (TRNSYS) software was used for the comparative performance tests. The analysis revealed that the AI-based algorithms can provide improved thermal-environment comfort and stability. In particular, the FL and ANFIS algorithms were superior to the ANN algorithm in providing better thermal conditions. The ANN-based algorithm, however, proved to be potentially the most energy-efficient and stable strategy among the four AI-based algorithms. It can be concluded that the optimal algorithm depends on the major focus of the strategy: if a comfortable thermal environment is the principal interest, the FL or ANFIS algorithm could be the proper solution, and if energy saving for space heating and system operation stability are the main concerns, the ANN-based algorithm may be applicable. - Highlights: • Integrated control algorithms were developed for the heating system and surface openings. • AI theories were applied to the control algorithms. • ANN, FL, and ANFIS were the applied AI theories. • Comparative performance tests were conducted using computer simulation. • AI algorithms presented a superior temperature environment.

  10. Modeling Multi-Event Non-Point Source Pollution in a Data-Scarce Catchment Using ANN and Entropy Analysis

    Directory of Open Access Journals (Sweden)

    Lei Chen

    2017-06-01

Full Text Available Event-based runoff–pollutant relationships have been key to water quality management, but the scarcity of measured data results in poor model performance, especially for multiple rainfall events. In this study, a new framework was proposed for event-based non-point source (NPS) prediction and evaluation. An artificial neural network (ANN) was used to extend the runoff–pollutant relationship from events with complete data to other, data-scarce events. An interpolation method was then used to solve the problem of tail deviation in the simulated pollutographs. In addition, the entropy method was utilized to evaluate the trained ANN comprehensively. A case study was performed in the Three Gorges Reservoir Region, China. Results showed that the ANN performed well in the NPS simulation, especially for light rainfall events, and that the phosphorus predictions were consistently more accurate than the nitrogen predictions under scarce data conditions. In addition, scarcity of peak pollutant data had a significant impact on model performance. Furthermore, traditional evaluation indicators lose some information during model evaluation, whereas the entropy weighting method provides a more accurate assessment. These results are valuable for designing monitoring schemes and quantifying event-based NPS pollution, especially in data-poor catchments.
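The entropy weighting idea used above for model evaluation can be sketched as follows. This is the generic entropy-weight formula, not the paper's exact implementation: indicators that discriminate more between cases carry lower entropy and therefore receive larger weights.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for evaluation indicators (rows: cases, cols: indicators)."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    P = X / X.sum(axis=0)                          # share of each case per indicator
    P = np.where(P > 0.0, P, 1e-12)                # guard against log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(n)   # information entropy per indicator
    d = 1.0 - e                                    # degree of diversification
    return d / d.sum()                             # normalized weights

# Two indicators over three events: the second varies far more between events,
# so it carries more information and receives the larger weight.
scores = np.array([[0.9, 0.2],
                   [0.8, 0.9],
                   [0.7, 0.1]])
w = entropy_weights(scores)
print(w.round(3))  # second weight >> first
```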

  11. Position reconstruction in LUX

    Science.gov (United States)

    Akerib, D. S.; Alsum, S.; Araújo, H. M.; Bai, X.; Bailey, A. J.; Balajthy, J.; Beltrame, P.; Bernard, E. P.; Bernstein, A.; Biesiadzinski, T. P.; Boulton, E. M.; Brás, P.; Byram, D.; Cahn, S. B.; Carmona-Benitez, M. C.; Chan, C.; Currie, A.; Cutter, J. E.; Davison, T. J. R.; Dobi, A.; Druszkiewicz, E.; Edwards, B. N.; Fallon, S. R.; Fan, A.; Fiorucci, S.; Gaitskell, R. J.; Genovesi, J.; Ghag, C.; Gilchriese, M. G. D.; Hall, C. R.; Hanhardt, M.; Haselschwardt, S. J.; Hertel, S. A.; Hogan, D. P.; Horn, M.; Huang, D. Q.; Ignarra, C. M.; Jacobsen, R. G.; Ji, W.; Kamdin, K.; Kazkaz, K.; Khaitan, D.; Knoche, R.; Larsen, N. A.; Lenardo, B. G.; Lesko, K. T.; Lindote, A.; Lopes, M. I.; Manalaysay, A.; Mannino, R. L.; Marzioni, M. F.; McKinsey, D. N.; Mei, D.-M.; Mock, J.; Moongweluwan, M.; Morad, J. A.; Murphy, A. St. J.; Nehrkorn, C.; Nelson, H. N.; Neves, F.; O'Sullivan, K.; Oliver-Mallory, K. C.; Palladino, K. J.; Pease, E. K.; Rhyne, C.; Shaw, S.; Shutt, T. A.; Silva, C.; Solmaz, M.; Solovov, V. N.; Sorensen, P.; Sumner, T. J.; Szydagis, M.; Taylor, D. J.; Taylor, W. C.; Tennyson, B. P.; Terman, P. A.; Tiedt, D. R.; To, W. H.; Tripathi, M.; Tvrznikova, L.; Uvarov, S.; Velan, V.; Verbus, J. R.; Webb, R. C.; White, J. T.; Whitis, T. J.; Witherell, M. S.; Wolfs, F. L. H.; Xu, J.; Yazdani, K.; Young, S. K.; Zhang, C.

    2018-02-01

The (x, y) position reconstruction method used in the analysis of the complete exposure of the Large Underground Xenon (LUX) experiment is presented. The algorithm is based on a statistical test that uses an iterative method to recover the photomultiplier tube (PMT) light response directly from the calibration data. The light response functions use a two-dimensional functional form to account for photons reflected on the inner walls of the detector. To increase the resolution for small pulses, a photon-counting technique was employed to describe the response of the PMTs. The reconstruction was assessed with calibration data including 83mKr (releasing a total energy of 41.5 keV) and 3H (β- with Q = 18.6 keV) decays, and a deuterium-deuterium (D-D) neutron beam (2.45 MeV). Within the detector's fiducial volume, the reconstruction achieved (x, y) position uncertainties of σ = 0.82 cm and σ = 0.17 cm for events of only 200 and 4,000 detected electroluminescence photons, respectively. Such signals are associated with electron recoils of energies ~0.25 keV and ~10 keV, respectively. The reconstructed position of the smallest events, with a single electron emitted from the liquid surface (22 detected photons), has a horizontal (x, y) uncertainty of 2.13 cm.
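The core of such a reconstruction, fitting the event position that best explains the observed PMT photon counts, can be sketched as a toy maximum-likelihood fit. Everything below is a stand-in: the real LUX light-response functions are recovered iteratively from calibration data, whereas here a made-up smooth response and a plain grid search are used purely for illustration.

```python
import numpy as np

# Toy maximum-likelihood (x, y) fit in the spirit of the statistical test described
# above. The 1/(1 + r^2)-style light response below is a made-up stand-in for the
# calibrated LUX response functions.

pmt_xy = np.array([[10.0, 0.0], [-10.0, 0.0], [0.0, 10.0], [0.0, -10.0]])

def expected_hits(x, y, total_photons):
    """Expected photon counts per PMT for an event at (x, y)."""
    r2 = ((pmt_xy - [x, y]) ** 2).sum(axis=1)
    w = 1.0 / (1.0 + r2 / 50.0)          # stand-in light response function
    return total_photons * w / w.sum()

def reconstruct(observed):
    """Grid search for the (x, y) minimizing the Poisson negative log-likelihood."""
    best = None
    for x in np.linspace(-8, 8, 81):
        for y in np.linspace(-8, 8, 81):
            mu = expected_hits(x, y, observed.sum())
            nll = (mu - observed * np.log(mu)).sum()
            if best is None or nll < best[0]:
                best = (nll, x, y)
    return best[1], best[2]

obs = expected_hits(3.0, -2.0, 200.0)    # noise-free event at (3, -2)
print(reconstruct(obs))                  # ~ (3.0, -2.0)
```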

  12. Energy Analysis of Decoders for Rakeness-Based Compressed Sensing of ECG Signals.

    Science.gov (United States)

    Pareschi, Fabio; Mangia, Mauro; Bortolotti, Daniele; Bartolini, Andrea; Benini, Luca; Rovatti, Riccardo; Setti, Gianluca

    2017-12-01

In recent years, compressed sensing (CS) has proved to be effective in lowering the power consumption of sensing nodes in biomedical signal processing devices. This is due to the fact that CS is capable of reducing the amount of data to be transmitted while ensuring correct reconstruction of the acquired waveforms. Rakeness-based CS has been introduced to further reduce the amount of transmitted data by exploiting the uneven distribution of the sensed signal's energy. Yet, so far no thorough analysis exists on the impact of its adoption on CS decoder performance. The latter point is of great importance, since body-area sensor network architectures may include intermediate gateway nodes that receive and reconstruct signals to provide local services before relaying data to a remote server. In this paper, we fill this gap by showing that rakeness-based design also improves reconstruction performance. We quantify these findings in the case of ECG signals and when a variety of reconstruction algorithms are used, either on a low-power microcontroller or on a heterogeneous mobile computing platform.
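At the decoder, reconstruction amounts to solving a sparse recovery problem. One of the simplest such decoders is iterative soft-thresholding (ISTA), sketched below on synthetic data; this is a generic CS decoder with a generic Gaussian sensing matrix, not the rakeness-optimized design or the specific algorithms benchmarked in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ista(y, A, lam=0.01, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage step
    return x

n, m, k = 64, 32, 4                          # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m) # generic (non-rakeness) sensing matrix
y = A @ x_true                               # compressed measurements
x_hat = ista(y, A)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # small rel. error
```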

  13. Energy measurement and longitudinal beam emittance reconstruction in L4T line

    CERN Document Server

    Meng, C; Garoby, R; Lallement, JB; Lombardi, A; Tang, J Y; Yarmohammadi Satri, M; CERN. Geneva. ATS Department

    2013-01-01

LINAC4 is a new linear accelerator for H− ions that will replace the proton Linac2 as injector for the CERN proton accelerator complex. LINAC4 accelerates H− ions from 45 keV to 160 MeV in a sequence of normal-conducting structures; the H− ions, with a kinetic energy of 160 MeV, are then sent to the PS Booster. This note describes two energy measurement methods and an improved method for longitudinal emittance reconstruction, which accounts for space charge using a multi-particle tracking code, together with the expected results.

  14. Energy systems and the climate dilemma: Reflecting the impact on CO2 emissions by reconstructing regional energy systems

    International Nuclear Information System (INIS)

    Carlson, Annelie

    2003-01-01

Global warming is one of the most important environmental issues today. One step for the European Union towards fulfilling the Kyoto protocol, which stipulates a worldwide decrease of greenhouse gas emissions, is to treat the environment as a scarce resource by attributing costs to environmental impact. This, combined with regarding the European electricity market as one common market in which coal condensing power is the marginal production, leads to the possibility of reducing CO2 emissions in Europe by reconstructing energy systems at a local scale in Sweden. A regional energy system model is used to study possibilities to replace electricity and fossil fuel used for heating with biomass, and how a reconstruction can affect CO2 emissions. An economic approach is used in which cost-effective technical measures are analysed under present conditions and with monetary values of externalities included. The analysis shows that, by acting economically rationally, a large amount of electricity and fossil fuel should, in three out of four cases, be replaced, leading to a substantial reduction of CO2 emissions

  15. Annäherung / Approaching

    Directory of Open Access Journals (Sweden)

    Carola Hilmes

    2007-03-01

Full Text Available This “Picture Book”, compiled by Stefan Moses, displays photographs of Ilse Aichinger, who is herself given voice through a series of stories and poems. The reader is drawn into this intimate dialogue, making it possible for image, text, and reader to converge.

  16. Visual NNet: An Educational ANN's Simulation Environment Reusing Matlab Neural Networks Toolbox

    Science.gov (United States)

    Garcia-Roselló, Emilio; González-Dacosta, Jacinto; Lado, Maria J.; Méndez, Arturo J.; Garcia Pérez-Schofield, Baltasar; Ferrer, Fátima

    2011-01-01

Artificial Neural Networks (ANNs) are nowadays a common subject in different curricula of graduate and postgraduate studies. Due to the complex algorithms involved and the dynamic nature of ANNs, simulation software has commonly been used to teach this subject. This software has usually been developed specifically for learning purposes, because…

  17. A TLBO based gradient descent learning-functional link higher order ANN: An efficient model for learning from non-linear data

    Directory of Open Access Journals (Sweden)

    Bighnaraj Naik

    2018-01-01

Full Text Available All higher order ANNs (HONNs), including the functional link ANN (FLANN), are sensitive to the random initialization of weights and rely on the learning algorithm adopted. Although selecting an efficient learning algorithm for a HONN helps to improve performance, initializing the weights with optimized rather than random values also plays an important role in its efficiency. In this paper, the problem-solving approach of teaching-learning based optimization (TLBO), along with the learning ability of gradient descent learning (GDL), is used to obtain the optimal set of weights of the FLANN learning model. TLBO does not require any algorithm-specific parameters; it requires only common independent parameters such as the population size, the number of iterations and the stopping criterion, thereby eliminating the intricacy of selecting algorithmic parameters for adjusting the set of weights of the FLANN model. The proposed TLBO-FLANN is implemented in MATLAB and compared with GA-FLANN, PSO-FLANN and HS-FLANN. The TLBO-FLANN is tested on various 5-fold cross-validated benchmark data sets from the UCI machine learning repository and analyzed under the null hypothesis using the Friedman test, Holm's procedure and post hoc ANOVA statistical analysis (Tukey and Dunnett tests).
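The two ingredients above, a functional expansion of the input followed by gradient-descent learning of the output weights, can be sketched as follows. This is a generic FLANN-style sketch, not the paper's implementation: a trigonometric expansion is assumed, and a zero initial weight vector stands in for the TLBO-optimized initialization the paper proposes.

```python
import numpy as np

rng = np.random.default_rng(1)

def expand(x):
    """Trigonometric functional expansion used in FLANN-type models (assumed basis)."""
    return np.column_stack([np.ones_like(x), x,
                            np.sin(np.pi * x), np.cos(np.pi * x),
                            np.sin(2 * np.pi * x), np.cos(2 * np.pi * x)])

def train_gdl(x, t, w0, lr=0.05, epochs=3000):
    """Plain gradient-descent learning (GDL) of the FLANN output weights."""
    Phi = expand(x)
    w = w0.copy()
    for _ in range(epochs):
        w -= lr * Phi.T @ (Phi @ w - t) / len(t)   # mean-squared-error gradient
    return w

x = rng.uniform(-1.0, 1.0, 200)
t = np.sin(np.pi * x) + 0.5 * x              # non-linear target in the FLANN's span
w0 = np.zeros(6)                             # stands in for the TLBO-chosen weights
w = train_gdl(x, t, w0)
mse = np.mean((expand(x) @ w - t) ** 2)
print(mse)  # close to zero
```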

  18. Hadron energy reconstruction for the ATLAS calorimetry in the framework of the nonparametrical method

    CERN Document Server

    Akhmadaliev, S Z; Ambrosini, G; Amorim, A; Anderson, K; Andrieux, M L; Aubert, Bernard; Augé, E; Badaud, F; Baisin, L; Barreiro, F; Battistoni, G; Bazan, A; Bazizi, K; Belymam, A; Benchekroun, D; Berglund, S R; Berset, J C; Blanchot, G; Bogush, A A; Bohm, C; Boldea, V; Bonivento, W; Bosman, M; Bouhemaid, N; Breton, D; Brette, P; Bromberg, C; Budagov, Yu A; Burdin, S V; Calôba, L P; Camarena, F; Camin, D V; Canton, B; Caprini, M; Carvalho, J; Casado, M P; Castillo, M V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Chadelas, R; Chalifour, M; Chekhtman, A; Chevalley, J L; Chirikov-Zorin, I E; Chlachidze, G; Citterio, M; Cleland, W E; Clément, C; Cobal, M; Cogswell, F; Colas, Jacques; Collot, J; Cologna, S; Constantinescu, S; Costa, G; Costanzo, D; Crouau, M; Daudon, F; David, J; David, M; Davidek, T; Dawson, J; De, K; de La Taille, C; Del Peso, J; Del Prete, T; de Saintignon, P; Di Girolamo, B; Dinkespiler, B; Dita, S; Dodd, J; Dolejsi, J; Dolezal, Z; Downing, R; Dugne, J J; Dzahini, D; Efthymiopoulos, I; Errede, D; Errede, S; Evans, H; Eynard, G; Fassi, F; Fassnacht, P; Ferrari, A; Ferrer, A; Flaminio, Vincenzo; Fournier, D; Fumagalli, G; Gallas, E; Gaspar, M; Giakoumopoulou, V; Gianotti, F; Gildemeister, O; Giokaris, N; Glagolev, V; Glebov, V Yu; Gomes, A; González, V; González de la Hoz, S; Grabskii, V; Graugès-Pous, E; Grenier, P; Hakopian, H H; Haney, M; Hébrard, C; Henriques, A; Hervás, L; Higón, E; Holmgren, Sven Olof; Hostachy, J Y; Hoummada, A; Huston, J; Imbault, D; Ivanyushenkov, Yu M; Jézéquel, S; Johansson, E K; Jon-And, K; Jones, R; Juste, A; Kakurin, S; Karyukhin, A N; Khokhlov, Yu A; Khubua, J I; Klioukhine, V I; Kolachev, G M; Kopikov, S V; Kostrikov, M E; Kozlov, V; Krivkova, P; Kukhtin, V V; Kulagin, M; Kulchitskii, Yu A; Kuzmin, M V; Labarga, L; Laborie, G; Lacour, D; Laforge, B; Lami, S; Lapin, V; Le Dortz, O; Lefebvre, M; Le Flour, T; Leitner, R; Leltchouk, M; Li, J; Liablin, M V; Linossier, O; Lissauer, D; Lobkowicz, F; Lokajícek, M; 
Lomakin, Yu F; López-Amengual, J M; Lund-Jensen, B; Maio, A; Makowiecki, D S; Malyukov, S N; Mandelli, L; Mansoulié, B; Mapelli, Livio P; Marin, C P; Marrocchesi, P S; Marroquim, F; Martin, P; Maslennikov, A L; Massol, N; Mataix, L; Mazzanti, M; Mazzoni, E; Merritt, F S; Michel, B; Miller, R; Minashvili, I A; Miralles, L; Mnatzakanian, E A; Monnier, E; Montarou, G; Mornacchi, Giuseppe; Moynot, M; Muanza, G S; Nayman, P; Némécek, S; Nessi, Marzio; Nicoleau, S; Niculescu, M; Noppe, J M; Onofre, A; Pallin, D; Pantea, D; Paoletti, R; Park, I C; Parrour, G; Parsons, J; Pereira, A; Perini, L; Perlas, J A; Perrodo, P; Pilcher, J E; Pinhão, J; Plothow-Besch, Hartmute; Poggioli, Luc; Poirot, S; Price, L; Protopopov, Yu; Proudfoot, J; Puzo, P; Radeka, V; Rahm, David Charles; Reinmuth, G; Renzoni, G; Rescia, S; Resconi, S; Richards, R; Richer, J P; Roda, C; Rodier, S; Roldán, J; Romance, J B; Romanov, V; Romero, P; Rossel, F; Rusakovitch, N A; Sala, P; Sanchis, E; Sanders, H; Santoni, C; Santos, J; Sauvage, D; Sauvage, G; Sawyer, L; Says, L P; Schaffer, A C; Schwemling, P; Schwindling, J; Seguin-Moreau, N; Seidl, W; Seixas, J M; Selldén, B; Seman, M; Semenov, A; Serin, L; Shaldaev, E; Shochet, M J; Sidorov, V; Silva, J; Simaitis, V J; Simion, S; Sissakian, A N; Snopkov, R; Söderqvist, J; Solodkov, A A; Soloviev, A; Soloviev, I V; Sonderegger, P; Soustruznik, K; Spanó, F; Spiwoks, R; Stanek, R; Starchenko, E A; Stavina, P; Stephens, R; Suk, M; Surkov, A; Sykora, I; Takai, H; Tang, F; Tardell, S; Tartarelli, F; Tas, P; Teiger, J; Thaler, J; Thion, J; Tikhonov, Yu A; Tisserant, S; Tokar, S; Topilin, N D; Trka, Z; Turcotte, M; Valkár, S; Varanda, M J; Vartapetian, A H; Vazeille, F; Vichou, I; Vinogradov, V; Vorozhtsov, S B; Vuillemin, V; White, A; Wielers, M; Wingerter-Seez, I; Wolters, H; Yamdagni, N; Yosef, C; Zaitsev, A; Zitoun, R; Zolnierowski, Y

    2002-01-01

This paper discusses hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter (consisting of a lead-liquid argon electromagnetic part and an iron-scintillator hadronic part) in the framework of the nonparametrical method. The nonparametrical method utilizes only the known e/h ratios and the electron calibration constants, and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to easy use in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values, and the fractional energy resolution is [(58 ± 3)%/√E + (2.5 ± 0.3)%] ⊕ (1.7 ± 0.2)/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74 ± 0.04 and agrees with the prediction that e/h > 1.66 for this electromagnetic calorimeter. Results of a study of the longitudinal hadronic shower development are also presented. The data have been taken in the H8 beam...
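To make the role of the e/h ratio concrete, the following sketch applies the standard e/π relation with an assumed (Wigmans-style) logarithmic π0 fraction. This is not the paper's full nonparametric method; the constants and the compartment-weighting scheme below are illustrative assumptions only.

```python
import math

def e_over_pi(e_h, energy_gev):
    """e/pi ratio from the intrinsic e/h and an assumed pi0-fraction parametrization."""
    f_pi0 = 0.11 * math.log(energy_gev)      # assumed f_pi0 ~ 0.11 ln(E), E in GeV
    return e_h / (1.0 + (e_h - 1.0) * f_pi0)

def reconstruct_energy(r_em, r_had, c_em, c_had, eh_em, eh_had, e_guess):
    """Scale each compartment's electron-calibrated signal by its e/pi ratio."""
    return (c_em * r_em * e_over_pi(eh_em, e_guess)
            + c_had * r_had * e_over_pi(eh_had, e_guess))

# For the measured e/h = 1.74 of the EM compartment, at a 100 GeV guess:
print(round(e_over_pi(1.74, 100.0), 2))  # ~1.27
```

In practice `e_guess` would be iterated: reconstruct, update the energy estimate, and repeat until stable.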

  19. 3.5D dynamic PET image reconstruction incorporating kinetics-based clusters

    International Nuclear Information System (INIS)

    Lu Lijun; Chen Wufan; Karakatsanis, Nicolas A; Rahmim, Arman; Tang Jing

    2012-01-01

Standard 3D dynamic positron emission tomography (PET) imaging consists of independent image reconstructions of individual frames, followed by application of an appropriate kinetic model to the time-activity curves at the voxel or region-of-interest (ROI) level. The emerging field of 4D PET reconstruction, by contrast, seeks to move beyond this scheme and incorporate information from multiple frames within the image reconstruction task. Here we propose a novel reconstruction framework aiming to enhance the quantitative accuracy of parametric images via introduction of priors based on voxel kinetics, generated by clustering preliminary reconstructed dynamic images to define clustered neighborhoods of voxels with similar kinetics. This is then followed by straightforward maximum a posteriori (MAP) 3D PET reconstruction applied to individual frames; as such, the method is labeled ‘3.5D’ image reconstruction. The use of cluster-based priors further enhances quantitative performance in dynamic PET imaging because (a) there are typically more voxels in clusters than in conventional local neighborhoods, and (b) neighboring voxels with distinct kinetics are less likely to be clustered together. Using realistic simulated 11C-raclopride dynamic PET data, the quantitative performance of the proposed method was investigated. Parametric distribution-volume (DV) and DV ratio (DVR) images were estimated from dynamic image reconstructions using (a) maximum-likelihood expectation maximization (MLEM), and MAP reconstructions using (b) the quadratic prior (QP-MAP), (c) the Green prior (GP-MAP) and (d, e) two proposed cluster-based priors (CP-U-MAP and CP-W-MAP), followed by graphical modeling, and were qualitatively and quantitatively compared for 11 ROIs. Overall, the proposed dynamic PET reconstruction methodology resulted in substantial visual as well as quantitative accuracy improvements (in terms of noise versus bias performance) for parametric DV
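The clustering step that defines the prior neighborhoods can be sketched with a minimal k-means on synthetic time-activity curves (TACs). The kinetic shapes, frame times, and noise level below are made-up stand-ins; the point is only that voxels with similar kinetics end up sharing a label, and those labels would define the clustered prior neighborhoods.

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans(X, k, n_iter=50):
    """Minimal k-means; rows of X are voxel time-activity curves (TACs)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)   # voxel-center distances
        labels = d.argmin(1)                                 # assign nearest center
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)          # update centers
    return labels

# Two synthetic kinetic classes: fast-washout vs. slowly accumulating voxels.
t = np.linspace(0.1, 60.0, 24)                       # frame mid-times (min), assumed
fast = np.exp(-t / 10.0) + 0.02 * rng.standard_normal((50, 24))
slow = (1.0 - np.exp(-t / 30.0)) + 0.02 * rng.standard_normal((50, 24))
labels = kmeans(np.vstack([fast, slow]), 2)
print(np.bincount(labels))  # ~ [50, 50]: voxels sharing a label form one prior cluster
```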

  20. Revisiting a model-independent dark energy reconstruction method

    Energy Technology Data Exchange (ETDEWEB)

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves; this holds particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes, as the state of the art does not allow them to be regarded on the same quality basis as SNeIa. We find a considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this method is modest in the sense that it provides only a picture of the global trend and has to be handled very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)

  1. Usefulness of ANN-based model for copper removal from aqueous solutions using agro industrial waste materials

    Directory of Open Access Journals (Sweden)

    Petrović Marija S.

    2015-01-01

Full Text Available The purpose of this study was to investigate the adsorption properties of locally available lignocellulosic biomaterials as biosorbents for the removal of copper ions from aqueous solution. The materials are generated from juice production (apricot stones) and from the corn milling process (corn cob). Such solid wastes have little or no economic value and very often present a disposal problem. Using batch adsorption techniques, the effects of the initial Cu(II) ion concentration (Ci), the amount of biomass (m) and the volume of metal solution (V) on biosorption efficiency and capacity were studied for both materials, without any pre-treatment. The optimal parameters for both biosorbents were selected based on the highest sorption capability in the removal of Cu(II). Experimental data were compared with second-order polynomial regression models (SOPs) and artificial neural networks (ANNs). SOPs showed acceptable coefficients of determination (0.842-0.997), while ANNs achieved high prediction accuracy (0.980-0.986) in comparison to experimental results. [Projekat Ministarstva nauke Republike Srbije, br. TR 31003, TR 31055]

  2. Ehe seep Eesti moodi / Anneli Aasmäe

    Index Scriptorium Estoniae

    Aasmäe, Anneli, 1973-

    2008-01-01

Producer Kristian Taska's adaptation of a Venezuelan soap opera to Estonian conditions, "Kalevi naised", shown on Kalev Sport : directed by Ingomar Vihman : starring Andrus Vaarik, Anne Reemann, Piret Kalda, Ken Saan and others.

  3. Multi-energy CT based on a prior rank, intensity and sparsity model (PRISM)

    International Nuclear Information System (INIS)

    Gao, Hao; Osher, Stanley; Yu, Hengyong; Wang, Ge

    2011-01-01

We propose a compressive sensing approach for multi-energy computed tomography (CT), namely the prior rank, intensity and sparsity model (PRISM). To further compress the multi-energy image, allowing reconstruction from fewer CT data and a lower radiation dose, the PRISM models a multi-energy image as the superposition of a low-rank matrix and a sparse matrix (with row dimension in space and column dimension in energy), where the low-rank matrix corresponds to the stationary background over energy, which has a low matrix rank, and the sparse matrix represents the remaining distinct spectral features, which are often sparse. Distinct from previous methods, the PRISM utilizes the generalized rank, e.g., the matrix rank of the tight-frame transform of a multi-energy image, which offers a way to characterize the multi-level and multi-filtered image coherence across the energy spectrum. In addition, energy-dependent intensity information can be incorporated into the PRISM in terms of the spectral curves for base materials, with which the restoration of the multi-energy image becomes the reconstruction of the energy-independent material composition matrix. In other words, the PRISM utilizes prior knowledge on the generalized rank and sparsity of a multi-energy image, and on the intensity/spectral characteristics of base materials. Furthermore, we develop an accurate and fast split Bregman method for the PRISM and demonstrate the superior performance of the PRISM relative to several competing methods in simulations. (papers)
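The low-rank-plus-sparse decomposition at the heart of this model can be sketched with a simple principal-component-pursuit solver. This is a generic robust-PCA-style sketch on fully sampled synthetic data, not the PRISM itself: there is no tomographic forward operator, no tight-frame generalized rank, and no split Bregman method here, only the L + S split applied to a made-up space-by-energy matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

def soft(M, tau):
    """Elementwise soft threshold: the prox of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def svt(M, tau):
    """Singular value thresholding: the prox of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def low_rank_plus_sparse(X, n_iter=1000):
    """Split X into low-rank L and sparse S via an ADMM scheme (inexact ALM)."""
    lam = 1.0 / np.sqrt(max(X.shape))            # standard PCP weight
    mu = X.size / (4.0 * np.abs(X).sum())        # standard penalty heuristic
    L, S, Y = (np.zeros_like(X) for _ in range(3))
    for _ in range(n_iter):
        L = svt(X - S + Y / mu, 1.0 / mu)
        S = soft(X - L + Y / mu, lam / mu)
        Y += mu * (X - L - S)                    # dual update enforces X = L + S
    return L, S

# Rows are pixels, columns are energy bins: a rank-1 "stationary background over
# energy" plus a few strong sparse spectral features, as in the PRISM picture.
space = rng.standard_normal((100, 1))
X = space @ np.ones((1, 8))
X[rng.choice(100, 5, replace=False), 3] += 5.0
L_hat, S_hat = low_rank_plus_sparse(X)
print(np.linalg.svd(L_hat, compute_uv=False)[:2])  # first singular value dominates
```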

  4. Inverse problems using ANN in long range atmospheric dispersion with signature analysis picked scattered numerical sensors from CFD

    International Nuclear Information System (INIS)

    Sharma, Pavan K.; Gera, B.; Ghosh, A.K.; Kushwaha, H.S.

    2010-01-01

Scalar dispersion in the atmosphere is an important area in which different approaches are followed to develop good analytical models. Analyses based on Computational Fluid Dynamics (CFD) codes offer an opportunity for model development based on first principles of physics, and hence such models have an edge over existing models. Both forward and backward calculation methods are being developed for atmospheric dispersion around NPPs at BARC. Forward modeling methods, which describe the atmospheric transport from sources to receptors, use forward-running transport and dispersion models or computational fluid dynamics models that are run many times, and the resulting dispersion field is compared to observations from multiple sensors. Backward or inverse modeling methods use only one model run in the reverse direction, from the receptors, to estimate the upwind sources. Inverse modeling methods include adjoint and tangent-linear models, Kalman filters, variational data assimilation, and neural networks. The present paper is aimed at developing a new approach in which identified specific signatures at receptor points form the basis for source estimation or inversion. This approach is expected to reduce large transient data sets to smaller, meaningful ones; in effect, it reduces an inherently transient data set to a time-independent mean data set. Forward computations were carried out with a CFD code for various cases to generate a large data set to train the ANN. A specific signature analysis was carried out to find the parameters of interest for ANN training, such as the peak concentration, the time to reach the peak concentration and the fall time. The ANN was trained with these data, and the source strength and location were predicted from the ANN. The inverse problem was thus solved using an ANN approach for long-range atmospheric dispersion. An illustration of the application of a CFD code for atmospheric dispersion studies for a hypothetical case is also included in the paper. 
(author)
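The signature-to-source inversion can be sketched with a small neural network trained to map receptor signatures back to the source parameter. Everything below is hypothetical: a made-up one-dimensional relation between source distance and the (time-to-peak, peak-concentration) signature stands in for the CFD-generated training set, and a tiny hand-rolled one-hidden-layer network stands in for the ANN used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical receptor "signatures": a source at distance d gives a time-to-peak
# ~ d/u and a peak concentration ~ Q/d (made-up physics standing in for the
# CFD-generated training data described above).
u, Q = 2.0, 50.0
d = rng.uniform(1.0, 10.0, 400)                      # source distances (targets)
t_peak = d / u + 0.01 * rng.standard_normal(400)
c_peak = Q / d + 0.05 * rng.standard_normal(400)
X = np.column_stack([t_peak, c_peak])
X = (X - X.mean(0)) / X.std(0)                       # standardize the features
y = d

# A one-hidden-layer ANN trained by plain gradient descent to invert the mapping.
W1 = 0.5 * rng.standard_normal((2, 16)); b1 = np.zeros(16)
W2 = 0.5 * rng.standard_normal(16); b2 = 0.0
lr = 0.02
for _ in range(6000):
    H = np.tanh(X @ W1 + b1)                         # hidden activations
    err = H @ W2 + b2 - y                            # prediction error
    gW2 = H.T @ err / len(y); gb2 = err.mean()
    gH = np.outer(err, W2) * (1.0 - H ** 2)          # backpropagate through tanh
    gW1 = X.T @ gH / len(y); gb1 = gH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse = np.sqrt(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(rmse)  # source distance recovered with a small error
```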

  5. Probing medium-induced energy loss with direct jet reconstruction in p+p and Cu+Cu collisions at PHENIX

    International Nuclear Information System (INIS)

    Lai, Y.-S.

    2009-01-01

We present the application of a new jet reconstruction algorithm, which uses a Gaussian filter to locate jets and reconstruct their energy, to p+p and heavy-ion data from the PHENIX detector. This algorithm is combined with a fake-jet rejection scheme that provides efficient jet reconstruction with an acceptable fake rate. We show our first results on the measured jet spectra and on jet-jet angular correlations in p+p and Cu+Cu collisions.

  6. SSVEP and ANN based optimal speller design for Brain Computer Interface

    Directory of Open Access Journals (Sweden)

    Irshad Ahmad Ansari

    2015-07-01

Full Text Available This work puts forward an optimal BCI (Brain Computer Interface) speller design based on Steady State Visual Evoked Potentials (SSVEP) and an Artificial Neural Network (ANN), in order to help people with severe motor impairments. The work was carried out to enhance the accuracy and communication rate of the BCI system. To optimize the BCI system, the work was divided into two steps: first, the design of an encoding technique for choosing characters from the speller interface; second, the development and implementation of a feature extraction algorithm to acquire optimal features, which are used to train the BCI system for classification using a neural network. Optimization of the speller interface focused on the representation of the character matrix and its design parameters. Considerable effort was also devoted to optimizing the selection of features and the user's time window. The optimized system works nearly as well with a new user, giving a characters-per-minute (CPM) rate of 13 ± 2 with an average accuracy of 94.5%, obtained by choosing the first two harmonics of the power spectral density as the feature vector and using a 2-second time window for each selection. The optimized BCI performs better with experienced users, with an average accuracy of 95.1%. Such good accuracy at a comparable CPM has not been reported before. DOI: 10.15181/csat.v2i2.1059
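The feature extraction described above, the power spectral density at the first two harmonics of each flicker frequency over a 2-second window, can be sketched on synthetic data. The sampling rate, candidate frequencies, and the simple argmax classifier below are illustrative assumptions, not the paper's trained ANN.

```python
import numpy as np

rng = np.random.default_rng(5)
fs = 256                                    # sampling rate (Hz), assumed
t = np.arange(2 * fs) / fs                  # 2 s selection window, as in the study

def harmonic_features(eeg, stim_hz):
    """PSD values at the first two harmonics of a candidate flicker frequency."""
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), 1.0 / fs)
    return np.array([psd[np.argmin(np.abs(freqs - h * stim_hz))] for h in (1, 2)])

def classify(eeg, candidates):
    """Pick the candidate frequency with the largest summed harmonic power
    (a stand-in for the ANN classifier trained on these features)."""
    return max(candidates, key=lambda f: harmonic_features(eeg, f).sum())

# Synthetic SSVEP response to a 13 Hz flicker plus background EEG noise.
eeg = (np.sin(2 * np.pi * 13 * t) + 0.4 * np.sin(2 * np.pi * 26 * t)
       + 0.5 * rng.standard_normal(len(t)))
print(classify(eeg, [8.0, 10.0, 13.0, 15.0]))  # 13.0
```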

  7. Reconstruction for Skull Base Defect Using Fat-Containing Perifascial Areolar Tissue.

    Science.gov (United States)

    Choi, Woo Young; Sung, Ki Wook; Kim, Young Seok; Hong, Jong Won; Roh, Tai Suk; Lew, Dae Hyun; Chang, Jong Hee; Lee, Kyu Sung

    2017-06-01

Skull base reconstruction is a challenging task; the method depends on the anatomical complexity and the size of the defect. We harvested fat-containing perifascial areolar tissue (PAT) for the reconstruction of limited skull base defects and for volume augmentation, and demonstrate it as an effective option for this purpose. From October 2013 to November 2015, 5 patients underwent operations using fat-containing PAT to fill a skull base defect and/or perform volume replacement in the forehead. PAT with a fat thickness of 5 to 10 mm was harvested from the inguinal region. The fat-containing PAT was grafted into the defect, in contact with the vascularized wound bed. Patients were followed up in terms of their clinical symptoms and postoperative magnetic resonance imaging findings. Four patients were treated using fat-containing PAT after tumor resection; one patient was treated for a post-traumatic forehead depression deformity. The fat-containing PAT included a fat thickness of 5 to 9 mm in all cases. The mean size of the grafted PAT was 65.6 cm² (28-140 cm²). The mean follow-up period was 18.6 months (12-31 months). There were no notable complications and no donor site morbidity. PAT with fat can be harvested easily and provides sufficient volume to treat the defect. It can also be used with other reconstructive methods, such as a free flap or a regional flap, to fill any remaining dead space. Therefore, fat-containing PAT can be an additional option for the reconstruction of skull base defects.

  8. Sensor-Topology Based Simplicial Complex Reconstruction from Mobile Laser Scanning

    Science.gov (United States)

    Guinard, S.; Vallet, B.

    2018-05-01

We propose a new method for the reconstruction of simplicial complexes (combining points, edges and triangles) from 3D point clouds acquired by Mobile Laser Scanning (MLS). Our main goal is to produce a reconstruction of a scene that is adapted to the local geometry of the objects. Our method uses the inherent topology of the MLS sensor to define a spatial adjacency relationship between points. We then investigate each possible connection between adjacent points and filter the connections by searching for collinear structures in the scene, or structures perpendicular to the laser beams. Next, we create triangles for each triplet of mutually connected edges. Last, we improve this method with a regularization based on the co-planarity of triangles and the collinearity of the remaining edges. We compare our results to a naive simplicial complex reconstruction based on edge length.

  9. Temporalis Myofascial Flap for Primary Cranial Base Reconstruction after Tumor Resection

    OpenAIRE

    Eldaly, Ahmed; Magdy, Emad A.; Nour, Yasser A.; Gaafar, Alaa H.

    2008-01-01

    Objective: To evaluate the use of the temporalis myofascial flap in primary cranial base reconstruction following surgical tumor ablation and to explain technical issues, potential complications, and donor site consequences along with their management. Design: Retrospective case series. Setting: Tertiary referral center. Participants: Forty-one consecutive patients receiving primary temporalis myofascial flap reconstructions following cranial base tumor resections in a 4-year period. Main Out...

  10. BP-ANN for fitting the temperature-germination model and its application in predicting sowing time and region for Bermudagrass.

    Directory of Open Access Journals (Sweden)

    Erxu Pi

    Full Text Available Temperature is one of the most significant environmental factors affecting germination of grass seeds. Reliable prediction of the optimal temperature for seed germination is crucial for determining the suitable regions and favorable sowing timing for turf grass cultivation. In this study, a back-propagation-artificial-neural-network-aided dual quintic equation (BP-ANN-QE) model was developed to improve the prediction of the optimal temperature for seed germination. This BP-ANN-QE model was used to determine optimal sowing times and suitable regions for three Cynodon dactylon cultivars (C. dactylon, 'Savannah' and 'Princess VII'). Prediction of the optimal temperature for these seeds was based on comprehensive germination tests using 36 day/night (high/low) temperature regimes (both ranging from 5/5 to 40/40°C with 5°C increments). Seed germination data from these temperature regimes were used to construct temperature-germination correlation models for estimating germination percentage with confidence intervals. Our tests revealed that the optimal high/low temperature regimes for all three bermudagrass cultivars are 30/5, 30/10, 35/5, 35/10, 35/15, 35/20, 40/15 and 40/20°C; constant temperatures ranging from 5 to 40°C inhibited the germination of all three cultivars. Comparing different simulation methods, including DQEM, Bisquare ANN-QE, and BP-ANN-QE, in establishing temperature-based germination percentage rules, we found that the R² values of the germination prediction function improved significantly, from about 0.6940-0.8177 (DQEM approach) to 0.9439-0.9813 (BP-ANN-QE). These results indicate that the BP-ANN-QE model performs better than the other compared models. Furthermore, national temperature grids generated from 25 years of monthly-average temperatures were fit into these functions, allowing us to map the germination percentage of these C. dactylon cultivars at the national scale.

  11. Bivariate Drought Analysis Using Streamflow Reconstruction with Tree Ring Indices in the Sacramento Basin, California, USA

    Directory of Open Access Journals (Sweden)

    Jaewon Kwak

    2016-03-01

    Full Text Available Long-term streamflow data are vital for analysis of hydrological droughts. Using an artificial neural network (ANN) model and nine tree-ring indices, this study reconstructed the annual streamflow of the Sacramento River for the period from 1560 to 1871. Using the reconstructed streamflow data, the copula method was used for bivariate drought analysis, deriving a hydrological drought return period plot for the Sacramento River basin. Results showed strong correlation among drought characteristics, and the drought with a 20-year return period (17.2 million acre-feet (MAF) per year) in the Sacramento River basin could be considered a critical level of drought for water shortages.
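    The bivariate ("AND") joint return period used in copula-based drought analysis can be illustrated with a Clayton copula. The copula family, parameter value, and mean inter-arrival time below are arbitrary illustrations; the abstract does not state which family was fitted:

```python
import numpy as np

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    return (u ** (-theta) + v ** (-theta) - 1.0) ** (-1.0 / theta)

def joint_return_period_and(u, v, theta, mu=1.0):
    """'AND' return period: both variables (e.g. drought duration and
    severity) exceed their marginal quantiles u and v. mu is the mean
    inter-arrival time of drought events in years."""
    return mu / (1.0 - u - v + clayton_cdf(u, v, theta))
```

    For example, with theta = 2, the joint return period at the 95th-percentile level is far longer than at the 90th, reflecting the strong dependence between drought characteristics noted in the abstract.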

  12. The treatment of female stress urinary incontinence: an evidenced-based review

    OpenAIRE

    Cameron, Anne P; Haraway, Allen McNeil

    2011-01-01

    Anne P Cameron, Allen McNeil Haraway, Department of Urology, Division of Neurourology and Pelvic Floor Reconstruction, University of Michigan Health System, Ann Arbor, MI, USA. Objective: To review the literature on the surgical and nonsurgical treatment options for stress urinary incontinence in women, focusing exclusively on randomized clinical trials and high quality meta-analyses. Materials and methods: A computer-aided and manual search for published randomized controlled trials and high qual...

  13. Intensity-based bayesian framework for image reconstruction from sparse projection data

    International Nuclear Information System (INIS)

    Rashed, E.A.; Kudo, Hiroyuki

    2009-01-01

    This paper presents a Bayesian framework for iterative image reconstruction from projection data measured over a limited number of views. The classical Nyquist sampling rule yields the minimum number of projection views required for accurate reconstruction. However, challenges exist in many medical and industrial imaging applications in which the projection data is undersampled. Classical analytical reconstruction methods such as filtered backprojection (FBP) are not a good choice for use in such cases because the data undersampling in the angular range introduces aliasing and streak artifacts that degrade lesion detectability. In this paper, we propose a Bayesian framework for maximum likelihood-expectation maximization (ML-EM)-based iterative reconstruction methods that incorporates a priori knowledge obtained from expected intensity information. The proposed framework is based on the fact that, in tomographic imaging, it is often possible to expect a set of intensity values of the reconstructed object with relatively high accuracy. The image reconstruction cost function is modified to include the l1-norm distance to the a priori known information. The proposed method has the potential to regularize the solution to reduce artifacts without missing lesions that cannot be expected from the a priori information. Numerical studies showed a significant improvement in image quality and lesion detectability under the condition of highly undersampled projection data. (author)
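    A minimal sketch of the underlying ML-EM iteration, plus a crude stand-in for the intensity prior (here a simple post-iteration pull toward a set of expected intensity levels; the paper's actual method adds an l1-norm penalty to the cost function, which this does not reproduce):

```python
import numpy as np

def ml_em(A, y, n_iter=50, x0=None):
    """Plain ML-EM for y ~ A x with nonnegative x (emission-style model).
    Multiplicative update: x <- x * A^T(y / Ax) / A^T 1."""
    m, n = A.shape
    x = np.ones(n) if x0 is None else x0.copy()
    sens = A.sum(axis=0)                 # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12          # avoid division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

def pull_to_levels(x, levels, weight=0.3):
    """Illustrative surrogate for the intensity prior: nudge each voxel
    toward the closest expected intensity value."""
    levels = np.asarray(levels)
    nearest = levels[np.abs(x[:, None] - levels[None, :]).argmin(axis=1)]
    return (1 - weight) * x + weight * nearest
```

    On a tiny consistent system the plain ML-EM update converges to the exact nonnegative solution.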

  14. Reconstructing see-saw models

    International Nuclear Information System (INIS)

    Ibarra, Alejandro

    2007-01-01

    In this talk we discuss the prospects to reconstruct the high-energy see-saw Lagrangian from low energy experiments in supersymmetric scenarios. We show that the model with three right-handed neutrinos could be reconstructed in theory, but not in practice. Then, we discuss the prospects to reconstruct the model with two right-handed neutrinos, which is the minimal see-saw model able to accommodate neutrino observations. We identify the relevant processes to achieve this goal, and comment on the sensitivity of future experiments to them. We find the prospects much more promising and we emphasize in particular the importance of the observation of rare leptonic decays for the reconstruction of the right-handed neutrino masses.

  15. China’s primary energy demands in 2020: Predictions from an MPSO–RBF estimation model

    International Nuclear Information System (INIS)

    Yu Shiwei; Wei Yiming; Wang Ke

    2012-01-01

    Highlights: ► A Mix-encoding PSO and RBF network-based energy demand forecasting model is proposed. ► The proposed model has a simpler structure and smaller estimation errors than other ANN models. ► China's energy demand could reach 6.25, 4.16, or 5.29 billion tce. ► China's energy efficiency in 2020 will increase by more than 30% compared with 2009. - Abstract: In the present study, a Mix-encoding Particle Swarm Optimization and Radial Basis Function (MPSO–RBF) network-based energy demand forecasting model is proposed and applied to forecast China's energy consumption until 2020. The energy demand is analyzed for the period from 1980 to 2009 based on GDP, population, proportion of industry in GDP, urbanization rate, and share of coal energy. The results reveal that the proposed MPSO–RBF based model has fewer hidden nodes and smaller estimation errors compared with other ANN-based models. The average annual growth of China's energy demand will be 6.70%, 2.81%, and 5.08% for the period between 2010 and 2020 in three scenarios, and demand could reach 6.25 billion, 4.16 billion, and 5.29 billion tons of coal equivalent in 2020. Regardless of the scenario, China's energy efficiency in 2020 will increase by more than 30% compared with 2009.
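    The RBF-network half of such a model can be sketched as Gaussian basis functions with output weights fitted by linear least squares. The mixed-encoding PSO, which in the paper selects the hidden nodes and widths, is omitted here, so the centers and width are fixed by hand for illustration:

```python
import numpy as np

def rbf_design(X, centers, width):
    """Design matrix of Gaussian radial basis functions."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, centers, width):
    """Fit output weights of an RBF network by linear least squares."""
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def predict_rbf(X, centers, width, w):
    return rbf_design(X, centers, width) @ w
```

    A quick check on a smooth 1-D target shows the fitted network reproducing the training data closely, which is the role the RBF layer plays in the forecasting model.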

  16. Reconstruction and 3D visualisation based on objective real 3D based documentation.

    Science.gov (United States)

    Bolliger, Michael J; Buck, Ursula; Thali, Michael J; Bolliger, Stephan A

    2012-09-01

    Reconstructions based directly upon forensic evidence alone are called primary information. Historically this consisted of documentation of findings by verbal protocols, photographs and other visual means. Currently, modern imaging techniques such as 3D surface scanning and radiological methods (computed tomography, magnetic resonance imaging) are also applied. Secondary interpretation is based on facts and the examiner's experience. Usually such reconstructive expert reports are given in written form, and are often enhanced by sketches. However, narrative interpretations can, especially in complex courses of action, be difficult to present and can be misunderstood. In this report we demonstrate the use of graphic reconstruction of secondary interpretation with supporting pictorial evidence, applying digital visualisation (using 'Poser') or scientific animation (using '3D Studio Max', 'Maya'), and present methods of clearly distinguishing between factual documentation and examiners' interpretation, based on three cases. The first case involved a pedestrian who was initially struck by a car on a motorway and was then run over by a second car. The second case involved a suicidal gunshot to the head with a rifle, in which the trigger was pushed with a rod. The third case dealt with a collision between two motorcycles. Pictorial reconstruction of the secondary interpretation of these cases has several advantages: the images enable an immediate overview, enhance clarity, and compel the examiner to look at all details if he or she is to create a complete image.

  17. Improving head and neck CTA with hybrid and model-based iterative reconstruction techniques

    NARCIS (Netherlands)

    Niesten, J. M.; van der Schaaf, I. C.; Vos, P. C.; Willemink, MJ; Velthuis, B. K.

    2015-01-01

    AIM: To compare image quality of head and neck computed tomography angiography (CTA) reconstructed with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and model-based iterative reconstruction (MIR) algorithms. MATERIALS AND METHODS: The raw data of 34 studies were

  18. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    Science.gov (United States)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the most important topics in soil science, and selecting appropriate predictors is a crucial factor in PTF development. Group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. Therefore, the current research aimed to apply GMDH, in comparison with multivariate linear regression (MLR) and artificial neural network (ANN), to develop several PTFs to predict soil cumulative infiltration on a point basis at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs, including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest of Iran. Then, applying GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, results showed that the PTFs developed by GMDH and MLR procedures using all soil RACs including Ks resulted in more accurate (with E values of 0.673-0.963) and reliable (with CV values lower than 11 percent) predictions of cumulative infiltration at different specific time steps. In contrast, the ANN procedure had lower accuracy (with E values of 0.356-0.890) and reliability (with CV values up to 50 percent) compared to GMDH and MLR. The results also revealed

  19. A Track Reconstructing Low-latency Trigger Processor for High-energy Physics

    CERN Document Server

    AUTHOR|(CDS)2067518

    2009-01-01

    The detection and analysis of the large number of particles emerging from high-energy collisions between atomic nuclei is a major challenge in experimental heavy-ion physics. Efficient trigger systems help to focus the analysis on relevant events. A primary objective of the Transition Radiation Detector of the ALICE experiment at the LHC is to trigger on high-momentum electrons. In this thesis, a trigger processor is presented that employs massive parallelism to perform the required online event reconstruction within 2 µs to contribute to the Level-1 trigger decision. Its three-stage hierarchical architecture comprises 109 nodes based on FPGA technology. Ninety processing nodes receive data from the detector front-end at an aggregate net bandwidth of 2.16 Tbps via 1080 optical links. Using specifically developed components and interconnections, the system combines high bandwidth with minimum latency. The employed tracking algorithm three-dimensionally reassembles the track segments found in the detector's dr...

  20. Comparative study of landslides susceptibility mapping methods: Multi-Criteria Decision Making (MCDM) and Artificial Neural Network (ANN)

    Science.gov (United States)

    Salleh, S. A.; Rahman, A. S. A. Abd; Othman, A. N.; Mohd, W. M. N. Wan

    2018-02-01

    As different approaches produce different results, it is crucial to determine which methods are accurate in order to analyse the event. The aim of this research is to compare the Rank Reciprocal (MCDM) and Artificial Neural Network (ANN) analysis techniques in determining susceptible zones of landslide hazard. The study is based on data obtained from various sources such as the local authority, Dewan Bandaraya Kuala Lumpur (DBKL), Jabatan Kerja Raya (JKR) and other agencies. The data were analysed and processed using ArcGIS. The results were compared by quantifying the risk ranking and area differential, and were also compared with the zonation map classified by DBKL. The results suggest that the ANN method gives better accuracy than MCDM, with an accuracy assessment 18.18% higher than that of the MCDM approach. This indicates that ANN provides more reliable results, probably owing to its ability to learn from the environment, thus portraying a realistic and accurate result.

  1. A Superresolution Image Reconstruction Algorithm Based on Landweber in Electrical Capacitance Tomography

    Directory of Open Access Journals (Sweden)

    Chen Deyun

    2013-01-01

    Full Text Available Because image reconstruction accuracy in electrical capacitance tomography is affected by the "soft field" nature of the problem and its ill-conditioning, a superresolution image reconstruction algorithm based on Landweber iteration is proposed in this paper, building on the working principle of the electrical capacitance tomography system. The method derives a closed-form solution by regularization and by fast Fourier transform of the convolution kernel. This ensures the uniqueness of the solution and improves the stability and quality of the reconstruction results. Simulation results show that the imaging precision and real-time performance of the algorithm are better than those of the Landweber algorithm, providing a new approach to electrical capacitance tomography image reconstruction.
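    The classical Landweber iteration that the proposed algorithm builds on is straightforward to sketch. The step-size choice and the [0, 1] permittivity clipping below are common ECT conventions, assumed here rather than taken from the paper:

```python
import numpy as np

def landweber(S, lam, alpha=None, n_iter=300):
    """Classical Landweber iteration g_{k+1} = g_k + alpha * S^T (lam - S g_k),
    the baseline the superresolution variant builds on. S is the (normalised)
    sensitivity matrix, lam the measured capacitance vector."""
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(S, 2) ** 2   # step below 2 / ||S||^2
    g = np.zeros(S.shape[1])
    for _ in range(n_iter):
        g = g + alpha * (S.T @ (lam - S @ g))
        g = np.clip(g, 0.0, 1.0)                  # permittivity in [0, 1]
    return g
```

    On a tiny well-posed example the iteration recovers the true distribution; in real ECT the system is ill-conditioned, which is what motivates the regularised closed-form variant described in the abstract.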

  2. New concept of electrical drives for paper and board machines based on energy efficiency principles

    Directory of Open Access Journals (Sweden)

    Jeftenić Borislav

    2006-01-01

    Full Text Available This paper describes the reconstruction of the press and drying sections of a paper machine, carried out in June 2001 in the board factory "Umka", as well as the expansion of the machine with a "third coating" section, completed in July 2002. The existing old drive of the press and drying groups was a line shaft drive, 76 m long. The new drive is based on conventional squirrel-cage induction motors with frequency converters. System control is carried out by a programmable controller, and communication between controllers, converters, and control boards is accomplished through Profibus. The reconstruction of the coating part of the machine was conducted to improve its performance by adding a device for applying the "third coating". The requirements for the power facility were to replace the existing equipment with new equipment based on energy efficiency principles and to provide adequate drives for the new technological sections. The new part of the facility also had to be connected with the remaining part of the machine, i.e., with the press and drying sections reconstructed in 2001. It should be stressed that energy efficiency here means realizing a new, modernized drive with better performance and greater capacity for the smallest possible increase in the installed power of the separate drives. The paper also graphically presents the achieved energy savings, based on measurements performed on separate parts of the paper machine before and after reconstruction.

  3. Prepectoral Implant-Based Breast Reconstruction and Postmastectomy Radiotherapy: Short-Term Outcomes

    Directory of Open Access Journals (Sweden)

    Steven Sigalove, MD

    2017-12-01

    Conclusions: Immediate implant-based prepectoral breast reconstruction followed by PMRT appears to be well tolerated, with no excess risk of adverse outcomes, at least in the short term. Longer follow-up is needed to better understand the risk of PMRT in prepectorally reconstructed breasts.

  4. Structured Light-Based 3D Reconstruction System for Plants

    OpenAIRE

    Nguyen, Thuy Tuong; Slaughter, David C.; Max, Nelson; Maloof, Julin N.; Sinha, Neelima

    2015-01-01

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud regi...

  5. High-SNR spectrum measurement based on Hadamard encoding and sparse reconstruction

    Science.gov (United States)

    Wang, Zhaoxin; Yue, Jiang; Han, Jing; Li, Long; Jin, Yong; Gao, Yuan; Li, Baoming

    2017-12-01

    The denoising capabilities of the H-matrix and cyclic S-matrix based on sparse reconstruction, employed in the Pixel of Focal Plane Coded Visible Spectrometer for spectrum measurement, are investigated, where the spectrum is sparse in a known basis. In the measurement process, the digital micromirror device, which implements the Hadamard coding, plays an important role. In contrast with Hadamard transform spectrometry, this spectrometer, based on shift invariance, may have the advantage of high efficiency. Simulations and experiments show that the nonlinear solution with sparse reconstruction has a better signal-to-noise ratio than the linear solution, and that the H-matrix outperforms the cyclic S-matrix whether the reconstruction method is nonlinear or linear.
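    Hadamard S-matrix encoding and its closed-form (linear) decoding can be illustrated at order 3. The construction from a normalised Hadamard matrix and the inverse formula S⁻¹ = 2/(n+1) · (2Sᵀ − J) are standard Hadamard-transform-optics results; the spectrometer's actual matrices are of course much larger, and the abstract's nonlinear sparse solver is not reproduced here:

```python
import numpy as np

def s_matrix_from_hadamard(H):
    """S-matrix: drop the first row/column of a normalised Hadamard matrix
    and map +1 -> 0 (mirror off), -1 -> 1 (mirror on)."""
    core = H[1:, 1:]
    return ((1 - core) // 2).astype(float)

def decode(S, y):
    """Closed-form linear inverse of an S-matrix: S^-1 = 2/(n+1) (2 S^T - J)."""
    n = S.shape[0]
    J = np.ones((n, n))
    return (2.0 / (n + 1)) * (2.0 * S.T - J) @ y

# Normalised Hadamard matrix of order 4 (Sylvester construction).
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])
```

    The resulting 3×3 S-matrix satisfies S Sᵀ = I + J, and encoding a spectrum followed by the closed-form decode returns it exactly in the noiseless case.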

  6. SENSOR-TOPOLOGY BASED SIMPLICIAL COMPLEX RECONSTRUCTION FROM MOBILE LASER SCANNING

    Directory of Open Access Journals (Sweden)

    S. Guinard

    2018-05-01

    Full Text Available We propose a new method for the reconstruction of simplicial complexes (combining points, edges and triangles) from 3D point clouds acquired by Mobile Laser Scanning (MLS). Our main goal is to produce a reconstruction of a scene that is adapted to the local geometry of objects. Our method uses the inherent topology of the MLS sensor to define a spatial adjacency relationship between points. We then investigate each possible connection between adjacent points and filter them by searching for collinear structures in the scene, or structures perpendicular to the laser beams. Next, we create triangles for each triplet of self-connected edges. Last, we improve this method with a regularization based on the co-planarity of triangles and collinearity of remaining edges. We compare our results to a naive simplicial complex reconstruction based on edge length.

  7. A shape-based quality evaluation and reconstruction method for electrical impedance tomography.

    Science.gov (United States)

    Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen

    2015-06-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.

  8. A shape-based quality evaluation and reconstruction method for electrical impedance tomography

    International Nuclear Information System (INIS)

    Antink, Christoph Hoog; Pikkemaat, Robert; Leonhardt, Steffen; Malmivuo, Jaakko

    2015-01-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images. (paper)

  9. l1- and l2-Norm Joint Regularization Based Sparse Signal Reconstruction Scheme

    Directory of Open Access Journals (Sweden)

    Chanzi Liu

    2016-01-01

    Full Text Available Many problems in signal processing and statistical inference involve finding a sparse solution to some underdetermined linear system of equations. This is also the setting of compressive sensing (CS), which can find the sparse solution from far fewer measurements than the length of the original signal. In this paper, we propose an l1- and l2-norm joint regularization based reconstruction framework to approach the original l0-norm based sparseness-inducing constrained sparse signal reconstruction problem. Firstly, it is shown that, by employing the simple conjugate gradient algorithm, the new formulation provides an effective framework to deduce the solution of the original sparse signal reconstruction problem with the l0-norm regularization term. Secondly, an upper limit on the reconstruction error is presented for the proposed framework, and it is shown that a smaller reconstruction error than with l1-norm relaxation approaches can be realized by using the proposed scheme in most cases. Finally, simulation results are presented to validate the proposed sparse signal reconstruction approach.
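    The jointly l1- and l2-regularised objective can be illustrated with a simple proximal-gradient (ISTA) solver. Note the paper itself deduces the solution with a conjugate-gradient scheme; ISTA is used here only as a well-known stand-in for minimising the same elastic-net style objective:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_elastic_net(A, y, lam1=0.05, lam2=0.01, n_iter=500):
    """Proximal-gradient solver for the l1 + l2 jointly regularised problem
    min 0.5*||Ax - y||^2 + lam1*||x||_1 + 0.5*lam2*||x||^2."""
    L = np.linalg.norm(A, 2) ** 2 + lam2   # Lipschitz constant of smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y) + lam2 * x
        x = soft(x - grad / L, lam1 / L)
    return x
```

    With A = I the solution is available in closed form (threshold, then shrink by 1 + lam2), which makes the solver easy to sanity-check.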

  10. Klaas ja mõis / Maie-Ann Raun

    Index Scriptorium Estoniae

    Raun, Maie-Ann, 1938-

    2007-01-01

    Glass art exhibition "Ringkäik" at Albu Manor; curators Virve Kiil, Kati Kerstna, Kairi Orgusaar. Works by Tiina Sarapu, Mare Saare, Eeva Käsperi, Kai Kiudsoo-Värvi, Pilvi Ojamaa, Merle Bukoveci, Kalli Seina, Viivi-Ann Keerdo, Liisi Junolaineni, Kristiina Uslari, and Ivo Lille are exhibited.

  11. Prediction of Splitting Tensile Strength of Concrete Containing Zeolite and Diatomite by ANN

    Directory of Open Access Journals (Sweden)

    E. Gülbandılar

    2017-01-01

    Full Text Available This study investigated two different artificial neural network (ANN) prediction models for the behavior of concrete containing zeolite and diatomite. To construct the models, experimental results for the 28-, 56- and 90-day splitting tensile strength of 7 different mixes with 63 specimens of concrete containing zeolite, diatomite, or both were gathered from the tests and used for training and testing the ANN systems. The data used in the ANN models are arranged in a format of seven input parameters, covering the age of samples, Portland cement, zeolite, diatomite, aggregate, water and hyper plasticizer, and one output parameter, the splitting tensile strength of the concrete. The training and testing results show that the two ANN systems have strong potential as a feasible tool for predicting the 28-, 56- and 90-day splitting tensile strength of concrete containing zeolite and diatomite.

  12. An energy-based beam hardening model in tomography

    International Nuclear Information System (INIS)

    Casteele, E van de; Dyck, D van; Sijbers, J; Raman, E

    2002-01-01

    As a consequence of the polychromatic x-ray source used in micro-computed tomography (μCT) and in medical CT, the attenuation is no longer a linear function of absorber thickness. If this nonlinear beam hardening effect is not compensated, the reconstructed images will be corrupted by cupping artefacts. In this paper, a bimodal energy model for the detected energy spectrum is presented, which can be used to reduce artefacts caused by beam hardening under well-specified conditions. Based on the combination of the spectrum of the source and the detector efficiency, the assumption is made that there are two dominant energies which describe the system. The validity of the proposed model is examined by fitting the model to experimental data points obtained on a microtomograph for different materials and source voltages.
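    A bimodal (two-energy) detected spectrum produces the characteristic sub-linear attenuation curve behind cupping artefacts, and inverting a calibrated curve linearises the measurement. The weights and attenuation coefficients below are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

def bimodal_attenuation(t, w=0.6, mu1=1.2, mu2=0.4):
    """Measured attenuation -ln(I/I0) under a two-energy model: a fraction w
    of detected photons at a 'soft' energy with coefficient mu1, the rest at
    a 'hard' energy with mu2 (illustrative coefficients)."""
    return -np.log(w * np.exp(-mu1 * t) + (1 - w) * np.exp(-mu2 * t))

def linearise(t_calib, a_calib, a_measured):
    """Beam-hardening correction: invert the calibrated attenuation curve to
    an equivalent thickness, then rescale by the near-zero-thickness slope
    so the corrected attenuation grows linearly with thickness."""
    t_eq = np.interp(a_measured, a_calib, t_calib)   # invert monotone curve
    mu_eff = a_calib[1] / t_calib[1]                 # slope near t = 0
    return mu_eff * t_eq
```

    The sub-linearity (attenuation of a double-thickness absorber is less than twice that of a single one) is exactly the beam-hardening effect the model is meant to compensate.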

  13. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities, including industrial and agricultural activities, are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation, and complete knowledge of the source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when its characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult under real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time: different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration

  14. Quantitative tectonic reconstructions of Zealandia based on crustal thickness estimates

    Science.gov (United States)

    Grobys, Jan W. G.; Gohl, Karsten; Eagles, Graeme

    2008-01-01

    Zealandia is a key piece in the plate reconstruction of Gondwana. The positions of its submarine plateaus are major constraints on the best fit and breakup involving New Zealand, Australia, Antarctica, and associated microplates. As the submarine plateaus surrounding New Zealand consist of extended and highly extended continental crust, classic plate tectonic reconstructions assuming rigid plates and narrow plate boundaries fail to reconstruct these areas correctly. However, if the early breakup history is to be reconstructed, it is crucial to consider crustal stretching in a plate-tectonic reconstruction. We present a reconstruction of the basins around New Zealand (Great South Basin, Bounty Trough, and New Caledonia Basin) based on crustal balancing, an approach that takes into account the rifting and thinning processes affecting continental crust. In a first step, we computed a crustal thickness map of Zealandia using seismic, seismological, and gravity data. The crustal thickness map shows the submarine plateaus to have a uniform crustal thickness of 20-24 km and the basins to have a thickness of 12-16 km. We assumed that a reconstruction of Zealandia should close the basins and lead to a maximally uniform crustal thickness, and used the standard deviation of the reconstructed crustal thickness as the measure of uniformity. The reconstruction of the Campbell Plateau area shows that the amount of extension in the Bounty Trough and the Great South Basin is far smaller than previously thought. Our results indicate that the extension of the Bounty Trough and Great South Basin occurred simultaneously.
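    The "most uniform crustal thickness" criterion can be sketched as a one-parameter search: stretch a basin's thinned crust back by a factor β and pick the β that minimises the standard deviation of the reconstructed thickness. This is a deliberate simplification of the full plate reconstruction, with made-up thickness values in the test:

```python
import numpy as np

def reconstructed_std(thickness, basin_mask, beta):
    """Restore a rifted basin by multiplying its crustal thickness by the
    stretching factor beta, then score uniformity by the standard deviation
    (the uniformity measure named in the abstract)."""
    restored = thickness.copy()
    restored[basin_mask] *= beta
    return restored.std()

def best_beta(thickness, basin_mask, betas):
    """Grid search for the stretching factor giving the most uniform crust."""
    scores = [reconstructed_std(thickness, basin_mask, b) for b in betas]
    return betas[int(np.argmin(scores))]
```

    For a plateau at 22 km and a basin thinned to 14 km, the optimal factor is near 22/14 ≈ 1.57, i.e. restoring the basin crust to plateau thickness.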

  15. Complications After Mastectomy and Immediate Breast Reconstruction for Breast Cancer: A Claims-Based Analysis

    Science.gov (United States)

    Jagsi, Reshma; Jiang, Jing; Momoh, Adeyiza O.; Alderman, Amy; Giordano, Sharon H.; Buchholz, Thomas A.; Pierce, Lori J.; Kronowitz, Steven J.; Smith, Benjamin D.

    2016-01-01

    Objective To evaluate complications after post-mastectomy breast reconstruction, particularly in the setting of adjuvant radiotherapy. Summary Background Data Most studies of complications after breast reconstruction have been conducted at centers of excellence; relatively little is known about complication rates in radiated patients treated in the broader community. This information is relevant for breast cancer patients' decision-making. Methods Using the claims-based MarketScan database, we described complications in 14,894 women undergoing mastectomy for breast cancer from 1998-2007 who received immediate autologous reconstruction (n=2637), immediate implant-based reconstruction (n=3007), or no reconstruction within the first two postoperative years (n=9250). We used a generalized estimating equation to evaluate associations between complications and radiotherapy over time. Results Wound complications were diagnosed within the first two postoperative years in 2.3% of patients without reconstruction, 4.4% with implants, and 9.5% with autologous reconstruction (p [...] with implants, and 20.7% with autologous reconstruction (p [...] implant removal in patients with implant reconstruction (OR 1.48, p [...] breast reconstruction differ by approach. Radiation therapy appears to modestly increase certain risks, including infection and implant removal. PMID:25876011

  16. Quick and reliable estimation of power distribution in a PHWR by ANN

    International Nuclear Information System (INIS)

    Dubey, B.P.; Jagannathan, V.; Kataria, S.K.

    1998-01-01

    Knowledge of the distribution of power in all the channels of a Pressurised Heavy Water Reactor (PHWR), as a result of a perturbation caused by one or more of the regulating devices, is very important from the operation and maintenance point of view of the reactor. Theoretical design codes available for this purpose take several minutes to calculate the channel power distribution on modern PCs. Artificial neural networks (ANNs) have been employed in predicting the channel power distribution of Indian PHWRs for any given configuration of the regulating devices of the reactor. ANNs produce the result much faster and with good accuracy. This paper describes the ANN methodology, its reliability, the validation range, and the scope for its possible on-line use in the actual reactor.

  17. Analyser-based phase contrast image reconstruction using geometrical optics.

    Science.gov (United States)

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-07-21

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution, provided the conditions of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser.
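
The symmetric Pearson type VII profile used here interpolates between a Lorentzian (shape parameter m = 1) and a Gaussian (m → ∞). The sketch below fits only the shape parameter of a synthetic rocking curve by coarse search; the study's fit would optimize amplitude, centre and width simultaneously, and the numbers are illustrative.

```python
import math

def pearson_vii(x, amp, x0, w, m):
    """Symmetric Pearson type VII profile used to fit the analyser rocking curve."""
    return amp * (1.0 + ((x - x0) / w) ** 2 * (2.0 ** (1.0 / m) - 1.0)) ** (-m)

# Synthetic "rocking curve" sampled from a known profile with m = 2
# (between Lorentzian, m = 1, and the Gaussian limit, m -> infinity)
xs = [i * 0.1 - 2.0 for i in range(41)]
data = [pearson_vii(x, 1.0, 0.0, 0.5, 2.0) for x in xs]

def sse(m):
    """Sum of squared errors for a candidate shape parameter."""
    return sum((d - pearson_vii(x, 1.0, 0.0, 0.5, m)) ** 2 for x, d in zip(xs, data))

# Coarse search over the shape parameter only
best_m = min((1.0, 1.5, 2.0, 3.0, 5.0), key=sse)
print(best_m)  # -> 2.0
```

The extra shape parameter is what lets the Pearson VII outperform a fixed Gaussian or linear model on measured rocking curves.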

  18. Determination of quantitative tissue composition by iterative reconstruction on 3D DECT volumes

    Energy Technology Data Exchange (ETDEWEB)

    Magnusson, Maria [Linkoeping Univ. (Sweden). Dept. of Electrical Engineering; Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Linkoeping Univ. (Sweden). Center for Medical Image Science and Visualization (CMIV); Malusek, Alexandr [Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Linkoeping Univ. (Sweden). Center for Medical Image Science and Visualization (CMIV); Nuclear Physics Institute AS CR, Prague (Czech Republic). Dept. of Radiation Dosimetry; Muhammad, Arif [Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Carlsson, Gudrun Alm [Linkoeping Univ. (Sweden). Dept. of Medical and Health Sciences, Radiation Physics; Linkoeping Univ. (Sweden). Center for Medical Image Science and Visualization (CMIV)

    2011-07-01

    Quantitative tissue classification using dual-energy CT has the potential to improve accuracy in radiation therapy dose planning as it provides more information about the material composition of scanned objects than the currently used methods based on single-energy CT. One problem that hinders the successful application of both single- and dual-energy CT is the presence of beam hardening and scatter artifacts in reconstructed data. Current pre- and post-correction methods used for image reconstruction often bias CT attenuation values and thus limit their applicability for quantitative tissue classification. Here we demonstrate simulation studies with a novel iterative algorithm that decomposes every soft tissue voxel into three base materials: water, protein, and adipose. The results demonstrate that beam hardening artifacts can effectively be removed and accurate estimation of the mass fractions of each base material can be achieved. Our iterative algorithm starts by calculating parallel projections on two previously reconstructed DECT volumes obtained from fan-beam or helical projections with a small cone-beam angle. The parallel projections are then used in an iterative loop. Future developments include segmentation of soft and bone tissue and subsequent determination of bone composition. (orig.)
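
The three-material decomposition step can be sketched as a small linear system: two measured attenuation values (one per energy spectrum) plus the constraint that the mass fractions sum to one. The attenuation coefficients below are invented for illustration, and the linear model ignores the beam-hardening effects the record's iterative loop exists to remove.

```python
def solve3(A, b):
    """Cramer's rule for a 3x3 linear system."""
    def det(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    D = det(A)
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]
        for r in range(3):
            Ai[r][i] = b[r]
        xs.append(det(Ai) / D)
    return xs

# Hypothetical mass-attenuation coefficients of the three base materials at
# the low- and high-energy spectra -- illustrative numbers only
mu_low  = {'water': 0.227, 'protein': 0.220, 'adipose': 0.205}
mu_high = {'water': 0.171, 'protein': 0.165, 'adipose': 0.160}

def decompose(meas_low, meas_high):
    """Mass fractions (water, protein, adipose) from two measurements,
    with the constraint that the fractions sum to one."""
    A = [[mu_low['water'],  mu_low['protein'],  mu_low['adipose']],
         [mu_high['water'], mu_high['protein'], mu_high['adipose']],
         [1.0, 1.0, 1.0]]
    return solve3(A, [meas_low, meas_high, 1.0])

# A voxel that is 60% water, 30% protein, 10% adipose
m_lo = 0.6 * 0.227 + 0.3 * 0.220 + 0.1 * 0.205
m_hi = 0.6 * 0.171 + 0.3 * 0.165 + 0.1 * 0.160
w = decompose(m_lo, m_hi)
print([round(x, 3) for x in w])  # -> [0.6, 0.3, 0.1]
```

Because the three base materials have similar attenuation, the system is nearly ill-conditioned, which is one reason unbiased attenuation values matter so much for quantitative classification.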

  19. The Royal Summer Palace, Ferdinand I and Anne

    Czech Academy of Sciences Publication Activity Database

    Dobalová, Sylva

    2015-01-01

    Roč. 7, č. 2 (2015), s. 162-175 ISSN 1804-1132 Institutional support: RVO:68378033 Keywords : Anne of Jagiello * Prague Castle * Ferdinand I of Habsburg * olive tree * dynasticism Subject RIV: AL - Art, Architecture, Cultural Heritage

  20. Assessing the viability of successful reconstruction of the dynamics of dark energy using varying fundamental couplings

    Energy Technology Data Exchange (ETDEWEB)

    Avelino, P.P., E-mail: ppavelin@fc.up.pt [Centro de Astrofisica da Universidade do Porto, Rua das Estrelas, 4150-762 Porto (Portugal); Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Losano, L., E-mail: losano@fisica.ufpb.br [Departamento de Fisica, Universidade Federal da Paraiba, 58051-970 Joao Pessoa, Paraiba (Brazil); Centro de Fisica do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Menezes, R., E-mail: rmenezes@dce.ufpb.br [Departamento de Ciencias Exatas, Universidade Federal da Paraiba, 58297-000 Rio Tinto, PB (Brazil); Departamento de Fisica, Universidade Federal de Campina Grande, 58109-970 Campina Grande, Paraiba (Brazil); Oliveira, J.C.R.E., E-mail: jespain@fe.up.pt [Centro de Fisica do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Departamento de Engenharia Fisica da Faculdade de Engenharia da Universidade do Porto, Rua Dr. Roberto Frias, s/n, 4200-465 Porto (Portugal)

    2012-10-31

    We assess the viability of successful reconstruction of the evolution of the dark energy equation of state using varying fundamental couplings, such as the fine structure constant or the proton-to-electron mass ratio. We show that the same evolution of the dark energy equation of state parameter with cosmic time may be associated with arbitrary variations of the fundamental couplings. Various examples of models with the same (different) background evolution and different (the same) time variation of fundamental couplings are studied in the Letter. Although we demonstrate that, for a broad family of models, it is possible to redefine the scalar field in such a way that its dynamics is that of a standard quintessence scalar field, in general such redefinition leads to the breakdown of the linear relation between the scalar field and the variation of fundamental couplings. This implies that the assumption of a linear coupling is not sufficient to guarantee a successful reconstruction of the dark energy dynamics and consequently additional model dependent assumptions about the scalar field responsible for the dark energy need to be made.

  1. Evaluation of knowledge-based reconstruction for magnetic resonance volumetry of the right ventricle in tetralogy of Fallot

    International Nuclear Information System (INIS)

    Nyns, Emile Christian Arie; Dragulescu, Andreea; Yoo, Shi-Joon; Grosse-Wortmann, Lars

    2014-01-01

    Cardiac magnetic resonance using the Simpson method is the gold standard for right ventricular volumetry. However, this method is time-consuming and not without sources of error. Knowledge-based reconstruction is a novel post-processing approach that reconstructs the right ventricular endocardial shape based on anatomical landmarks and a database of various right ventricular configurations. To assess the feasibility, accuracy and labor intensity of knowledge-based reconstruction in repaired tetralogy of Fallot (TOF). The short-axis cine cardiac MR datasets of 35 children and young adults (mean age 14.4 ± 2.5 years) after TOF repair were studied using both knowledge-based reconstruction and the Simpson method. Intraobserver, interobserver and inter-method variability were assessed using Bland-Altman analyses. Knowledge-based reconstruction was feasible and highly accurate as compared to the Simpson method. Intra- and inter-method variability for knowledge-based reconstruction measurements showed good agreement. Volumetric assessment using knowledge-based reconstruction was faster when compared with the Simpson method (10.9 ± 2.0 vs. 7.1 ± 2.4 min, P < 0.001). In patients with repaired tetralogy of Fallot, knowledge-based reconstruction is a feasible, accurate and reproducible method for measuring right ventricular volumes and ejection fraction. The post-processing time of right ventricular volumetry using knowledge-based reconstruction was significantly shorter when compared with the routine Simpson method. (orig.)
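
The variability assessment in this record relies on Bland-Altman analysis: the bias (mean difference) between two methods and its 95% limits of agreement. A minimal sketch with hypothetical right-ventricular volumes (the numbers are invented, not the study's data):

```python
import math

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    measurement methods, as used for intra-/inter-method variability."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical RV end-diastolic volumes (mL) by the Simpson method and by
# knowledge-based reconstruction for five patients
simpson   = [150.0, 180.0, 165.0, 200.0, 175.0]
knowledge = [152.0, 178.0, 168.0, 199.0, 176.0]
bias, lo, hi = bland_altman(simpson, knowledge)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

A small bias with narrow limits of agreement is what "good agreement" means quantitatively in the abstract's inter-method comparison.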

  2. Evaluation of knowledge-based reconstruction for magnetic resonance volumetry of the right ventricle in tetralogy of Fallot

    Energy Technology Data Exchange (ETDEWEB)

    Nyns, Emile Christian Arie; Dragulescu, Andreea [University of Toronto, The Labatt Family Heart Centre, The Hospital for Sick Children, Toronto (Canada); Yoo, Shi-Joon; Grosse-Wortmann, Lars [University of Toronto, The Labatt Family Heart Centre, The Hospital for Sick Children, Toronto (Canada); University of Toronto, Department of Diagnostic Imaging, The Hospital for Sick Children, Toronto (Canada)

    2014-12-15

    Cardiac magnetic resonance using the Simpson method is the gold standard for right ventricular volumetry. However, this method is time-consuming and not without sources of error. Knowledge-based reconstruction is a novel post-processing approach that reconstructs the right ventricular endocardial shape based on anatomical landmarks and a database of various right ventricular configurations. To assess the feasibility, accuracy and labor intensity of knowledge-based reconstruction in repaired tetralogy of Fallot (TOF). The short-axis cine cardiac MR datasets of 35 children and young adults (mean age 14.4 ± 2.5 years) after TOF repair were studied using both knowledge-based reconstruction and the Simpson method. Intraobserver, interobserver and inter-method variability were assessed using Bland-Altman analyses. Knowledge-based reconstruction was feasible and highly accurate as compared to the Simpson method. Intra- and inter-method variability for knowledge-based reconstruction measurements showed good agreement. Volumetric assessment using knowledge-based reconstruction was faster when compared with the Simpson method (10.9 ± 2.0 vs. 7.1 ± 2.4 min, P < 0.001). In patients with repaired tetralogy of Fallot, knowledge-based reconstruction is a feasible, accurate and reproducible method for measuring right ventricular volumes and ejection fraction. The post-processing time of right ventricular volumetry using knowledge-based reconstruction was significantly shorter when compared with the routine Simpson method. (orig.)

  3. The nuclear: energy and environmental stakes and political and strategic context

    International Nuclear Information System (INIS)

    Lauvergeon, A.

    2003-01-01

    This document presents the intervention of Anne Lauvergeon at the Adapes colloquium ''The nuclear: energy and environmental stakes and political and geo-strategic context''. Anne Lauvergeon is the president of the Areva board. Her speech takes stock of energy resources and demand in the face of economic development, in a context of environmental quality, and especially of the part that nuclear energy will play in the future. (A.L.B.)

  4. ANN Surface Roughness Optimization of AZ61 Magnesium Alloy Finish Turning: Minimum Machining Times at Prime Machining Costs

    Directory of Open Access Journals (Sweden)

    Adel Taha Abbas

    2018-05-01

    Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth–Pareto optimization of an artificial neural network (ANN) is presented in this paper for surface roughness (Ra) prediction of one component in computer numerical control (CNC) turning over minimal machining time (Tm) and at prime machining costs (C). An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP), to predict Ra, Tm, and C, in relation to cutting speed, vc, depth of cut, ap, and feed per revolution, fr. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values vc, ap, and fr. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, Tm = 0.358 min/cm3, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed vc = 250 m/min, cutting depth ap = 1.0 mm, and feed per revolution fr = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness.
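
The "global minimum length of a three-dimensional estimation vector" amounts to normalizing the three objectives (Ra, Tm, C) so they are commensurable and picking the candidate with the smallest Euclidean norm. The sketch below uses the reported optimum plus three invented competitor points; normalizing by the worst observed value per objective is one simple choice, not necessarily the authors'.

```python
import math

# Candidate operating points: (Ra in um, Tm in min/cm^3, C in $); the first
# is the optimum reported in the record, the rest are hypothetical
candidates = [
    (0.087, 0.358, 8.2973),
    (0.150, 0.300, 9.10),
    (0.080, 0.500, 10.50),
    (0.200, 0.250, 8.00),
]

def norm_len(point, scales):
    """Length of the estimation vector after normalizing each objective."""
    return math.sqrt(sum((v / s) ** 2 for v, s in zip(point, scales)))

# Normalize by the worst observed value of each objective so that roughness,
# time and cost contribute on comparable scales
scales = tuple(max(c[i] for c in candidates) for i in range(3))
best = min(candidates, key=lambda c: norm_len(c, scales))
print(best)  # -> (0.087, 0.358, 8.2973)
```

Collapsing three objectives into one vector length is a scalarization of the Edgeworth–Pareto problem; different normalizations pick different points on the Pareto front.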

  5. Comparison of ANN and RKS approaches to model SCC strength

    Science.gov (United States)

    Prakash, Aravind J.; Sathyan, Dhanya; Anand, K. B.; Aravind, N. R.

    2018-02-01

    Self compacting concrete (SCC) is a high performance concrete that has high flowability and can be used in heavily reinforced concrete members with minimal compaction, segregation and bleeding. The mix proportioning of SCC is highly complex, and a large number of trials are required to obtain a mix with the desired properties, resulting in the wastage of materials and time. Research on SCC has been highly empirical, and no theoretical relationships have been developed between the mixture proportioning and engineering properties of SCC. In this work, the effectiveness of an artificial neural network (ANN) and the random kitchen sink algorithm (RKS) with the regularized least squares algorithm (RLS) in predicting the split tensile strength of SCC is analysed. The random kitchen sink algorithm is used to map data to a higher dimension, and this data is then classified using the regularized least squares algorithm. The training and testing data for the algorithms were obtained experimentally using standard test procedures and available materials. A total of 40 trials were performed, which were used as the training and testing data. Trials were performed by varying the amount of fine aggregate, coarse aggregate, water, and the dosage and type of superplasticizer. The prediction accuracy of the ANN and RKS models is checked by comparing the RMSE values of both. The analysis shows that even though the RKS model is well suited to large data sets, its prediction accuracy is as good as that of a conventional prediction method such as ANN, so the split tensile strength model developed by RKS can be used in industry for the proportioning of SCC with tailor-made properties.
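
The RKS + RLS pipeline can be sketched in a few lines: draw random cosine features once, then fit a regularized least-squares model on the mapped data and score it by RMSE. The 1-D target and all constants below are toy stand-ins for the SCC strength data, not the study's setup.

```python
import math, random

def rks_features(x, omegas, biases):
    """Random kitchen sink map: fixed random cosine features cos(w*x + b)."""
    return [math.cos(w * x + b) for w, b in zip(omegas, biases)]

def solve(Amat, bvec):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(Amat)
    M = [row[:] + [bv] for row, bv in zip(Amat, bvec)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                for k in range(c, n + 1):
                    M[r][k] -= f * M[c][k]
    return [M[i][n] / M[i][i] for i in range(n)]

def ridge(X, y, lam=1e-4):
    """Regularized least squares: w = (X^T X + lam*I)^-1 X^T y."""
    d = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) + (lam if i == j else 0.0)
            for j in range(d)] for i in range(d)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(d)]
    return solve(XtX, Xty)

random.seed(0)
omegas = [random.gauss(0.0, 2.0) for _ in range(10)]
biases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(10)]

# Toy 1-D regression target standing in for the split tensile strength data
xs = [i / 20.0 for i in range(21)]
ys = [math.sin(3.0 * x) for x in xs]
X = [rks_features(x, omegas, biases) for x in xs]
w = ridge(X, ys)

# RMSE, the same criterion the study uses to compare the ANN and RKS models
rmse = math.sqrt(sum((sum(wi * xi for wi, xi in zip(w, row)) - yi) ** 2
                     for row, yi in zip(X, ys)) / len(ys))
print(round(rmse, 4))
```

The appeal of RKS is that the random feature map is drawn once and never trained, so fitting reduces to a single linear solve instead of iterative backpropagation.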

  6. Reconstruction of the cranial base in surgery for jugular foramen tumors.

    Science.gov (United States)

    Ramina, Ricardo; Maniglia, Joao J; Paschoal, Jorge R; Fernandes, Yvens B; Neto, Mauricio Coelho; Honorato, Donizeti C

    2005-04-01

    The surgical removal of a jugular foramen (JF) tumor presents the neurosurgeon with a complex management problem that requires an understanding of the natural history, diagnosis, surgical approaches, and postoperative complications. Cerebrospinal fluid (CSF) leakage is one of the most common complications of this surgery. Different surgical approaches and management concepts to avoid this complication have been described, mainly in the ear, nose, and throat literature. The purpose of this study was to review the results of CSF leakage prevention in a series of 66 patients with JF tumors operated on by a multidisciplinary cranial base team using a new technique for cranial base reconstruction. We retrospectively studied 66 patients who had JF tumors with intracranial extension and who underwent surgical treatment in our institutions from January 1987 to December 2001. Paragangliomas were the most frequent lesions, followed by schwannomas and meningiomas. All patients were operated on using the same multidisciplinary surgical approach (neurosurgeons and ear, nose, and throat surgeons). A surgical strategy for reconstruction of the cranial base using vascularized flaps was carried out. The closure of the surgical wound was performed in three layers. A specially developed myofascial flap (temporalis fascia, cervical fascia, and sternocleidomastoid muscle) associated to the inferior rotation of the posterior portion of the temporalis muscle was used to reconstruct the cranial base with vascularized flaps. In this series of 66 patients, postoperative CSF leakage developed in three cases. These patients presented with very large or recurrent tumors, and the postoperative CSF fistulae were surgically closed. The cosmetic result obtained with this reconstruction was classified as excellent or good in all patients. Our results compare favorably with those reported in the literature. The surgical strategy used for cranial base reconstruction presented in this article has

  7. Ranking of tree-ring based temperature reconstructions of the past millennium

    Science.gov (United States)

    Esper, Jan; Krusic, Paul J.; Ljungqvist, Fredrik C.; Luterbacher, Jürg; Carrer, Marco; Cook, Ed; Davi, Nicole K.; Hartl-Meier, Claudia; Kirdyanov, Alexander; Konter, Oliver; Myglan, Vladimir; Timonen, Mauri; Treydte, Kerstin; Trouet, Valerie; Villalba, Ricardo; Yang, Bao; Büntgen, Ulf

    2016-08-01

    Tree-ring chronologies are widely used to reconstruct high- to low-frequency variations in growing season temperatures over centuries to millennia. The relevance of these time series in large-scale climate reconstructions is often determined by the strength of their correlation against instrumental temperature data. However, this single criterion ignores several important quantitative and qualitative characteristics of tree-ring chronologies: (i) data homogeneity, (ii) sample replication, (iii) growth coherence, (iv) chronology development, and (v) climate signal, including the correlation with instrumental data. Based on these five characteristics, a reconstruction-scoring scheme is proposed and applied to 39 published, millennial-length temperature reconstructions from Asia, Europe, North America, and the Southern Hemisphere. Results reveal that no reconstruction scores highest in every category; each has its own strengths and weaknesses. Reconstructions that perform better overall include N-Scan and Finland from Europe, E-Canada from North America, and Yamal and Dzhelo from Asia. Reconstructions performing less well include W-Himalaya and Karakorum from Asia, Tatra and S-Finland from Europe, and Great Basin from North America. By providing a comprehensive set of criteria to evaluate tree-ring chronologies, we hope to improve the development of large-scale temperature reconstructions spanning the past millennium. All reconstructions and their corresponding scores are provided at http://www.blogs.uni-mainz.de/fb09climatology.
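
Mechanically, the scoring scheme assigns each chronology a score per category and ranks by the total. The category scores below are invented to illustrate the bookkeeping, not taken from the paper:

```python
def total_score(scores):
    """Sum of the five category scores: data homogeneity, sample replication,
    growth coherence, chronology development, and climate signal."""
    return sum(scores.values())

# Hypothetical scores (higher is better) for two of the named chronologies
reconstructions = {
    "N-Scan":     {"homogeneity": 3, "replication": 3, "coherence": 2,
                   "development": 3, "signal": 3},
    "W-Himalaya": {"homogeneity": 1, "replication": 2, "coherence": 2,
                   "development": 1, "signal": 1},
}
ranked = sorted(reconstructions,
                key=lambda name: total_score(reconstructions[name]),
                reverse=True)
print(ranked)  # -> ['N-Scan', 'W-Himalaya']
```

Keeping the five categories separate (rather than collapsing to a single correlation) is exactly what lets a chronology rank well overall while still being weak in one category.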

  8. Turkey's net energy consumption

    International Nuclear Information System (INIS)

    Soezen, Adnan; Arcaklioglu, Erol; Oezkaymak, Mehmet

    2005-01-01

    The main goal of this study is to develop equations for forecasting net energy consumption (NEC) using an artificial neural-network (ANN) technique in order to determine the future level of energy consumption in Turkey. In this study, two different models were used to train the neural network. In one of them, population, gross generation, installed capacity and years are used in the input layer of the network (Model 1). In the other, energy sources are used in the input layer of the network (Model 2). The net energy consumption is in the output layer for both models. Data from 1975 to 2003 are used for the training. Three years (1981, 1994 and 2003) are used only as test data to confirm the method. The statistical coefficients of multiple determination (R² values) for the training data are equal to 0.99944 and 0.99913 for Models 1 and 2, respectively. Similarly, the R² values for the testing data are equal to 0.997386 and 0.999558 for Models 1 and 2, respectively. According to the results, the net energy consumption has been predicted with acceptable accuracy using the ANN technique. Apart from reducing the overall time required, the ANN approach makes it possible to find solutions that make energy applications more viable and thus more attractive to potential users. It is also expected that this study will be helpful in developing highly applicable energy policies.
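
The R² figures quoted above are coefficients of determination, computed from the residual and total sums of squares. A minimal sketch with hypothetical consumption values mimicking the near-perfect fit reported:

```python
def r_squared(actual, predicted):
    """Coefficient of determination used to judge the ANN fit:
    R^2 = 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1.0 - ss_res / ss_tot

# Hypothetical net-energy-consumption values (arbitrary units) and a
# near-perfect prediction, mimicking the reported R^2 ~ 0.999
actual    = [40.0, 45.0, 50.0, 56.0, 63.0]
predicted = [40.2, 44.9, 50.1, 55.8, 63.1]
print(round(r_squared(actual, predicted), 4))  # -> 0.9997
```

Note that R² close to 1 on held-out test years (1981, 1994, 2003), not just the training years, is what supports the claim of acceptable forecasting accuracy.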

  9. MO-FG-204-08: Optimization-Based Image Reconstruction From Unevenly Distributed Sparse Projection Views

    International Nuclear Information System (INIS)

    Xie, Huiqiao; Yang, Yi; Tang, Xiangyang; Niu, Tianye; Ren, Yi

    2015-01-01

    Purpose: Optimization-based reconstruction has been proposed and investigated for reconstructing CT images from sparse views, so that the radiation dose can be substantially reduced while acceptable image quality is maintained. The investigation has so far focused on reconstruction from evenly distributed sparse views. Recognizing the clinical situations wherein only unevenly sparse views are available, e.g., image-guided radiation therapy, CT perfusion and multi-cycle cardiovascular imaging, we investigate the performance of optimization-based image reconstruction from unevenly distributed sparse projection views in this work. Methods: The investigation is carried out using the FORBILD and an anthropomorphic head phantom. In the study, 82 views, which are evenly sorted out from a full (360°) axial CT scan consisting of 984 views, form sub-scan I. Another 82 views are sorted out in a similar manner to form sub-scan II. Together they form a CT scan with sparse (164) views at a 1:6 ratio. By shifting the two sub-scans relative to each other in view angulation, a CT scan with unevenly distributed sparse (164) views at a 1:6 ratio is formed. An optimization-based method is implemented to reconstruct images from the unevenly distributed views. Taking the FBP reconstruction from the full scan (984 views) as the reference, the root mean square (RMS) error between the reference and the optimization-based reconstruction is used to evaluate the performance quantitatively. Results: On visual inspection, the optimization-based method substantially outperforms FBP in the reconstruction from unevenly distributed views, which is quantitatively verified by the RMS error gauged globally and in ROIs in both the FORBILD and anthropomorphic head phantoms. The RMS error increases with increasing severity of the uneven angular distribution, especially in the case of the anthropomorphic head phantom.
Conclusion: The optimization-based image reconstruction can save radiation dose up to 12-fold while providing acceptable image quality
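
Iterative optimization-based reconstruction from incomplete ray data can be illustrated on a tiny scale with a Landweber/SIRT-style update, x ← x + λ·Aᵀ(b − Ax). The 2×2 "image" and its four ray sums below stand in for projections from sparse, unevenly distributed view angles; the actual algorithm in the record is more sophisticated.

```python
def sirt(A, b, n_iter=200, lam=0.1):
    """Simultaneous iterative reconstruction: x <- x + lam * A^T (b - A x),
    a minimal stand-in for optimization-based sparse-view reconstruction."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        r = [bi - sum(ai * xi for ai, xi in zip(row, x)) for row, bi in zip(A, b)]
        for j in range(n):
            x[j] += lam * sum(A[i][j] * r[i] for i in range(len(A)))
    return x

# Toy 2x2 "image" probed by four ray sums (two rows, two columns)
true_img = [1.0, 2.0, 3.0, 4.0]      # pixels [p00, p01, p10, p11]
A = [[1, 1, 0, 0],   # row 0 sum
     [0, 0, 1, 1],   # row 1 sum
     [1, 0, 1, 0],   # column 0 sum
     [0, 1, 0, 1]]   # column 1 sum
b = [3.0, 7.0, 4.0, 6.0]
rec = sirt(A, b)
print([round(v, 2) for v in rec])  # -> [1.0, 2.0, 3.0, 4.0]
```

Even though this toy system is rank-deficient (row sums and column sums are not independent), the iteration converges to the minimum-norm solution, which here coincides with the true image; regularization terms in real optimization-based methods play the analogous role for genuinely under-sampled view sets.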

  10. The article collection "Eesti teadlased paguluses" (Estonian Scholars in Exile) has been published / Anne Valmas

    Index Scriptorium Estoniae

    Valmas, Anne, 1941-2017

    2009-01-01

    On the conference "Eesti teadlased paguluses" (Estonian Scholars in Exile), held on 24 March 2009 in cooperation between the centre for Estonian diaspora literature of the Academic Library of Tallinn University and the Library of Tallinn University of Technology, which presented the contribution of expatriate Estonian scholars to world science. A collection of articles, "Eesti teadlased paguluses", based on the conference presentations, was compiled by Vahur Mägi and Anne Valmas. Tallinn : Tallinna Ülikooli Kirjastus, 2009

  11. Reducing the effects of metal artefact using high keV monoenergetic reconstruction of dual energy CT (DECT) in hip replacements

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Mark [Norfolk and Norwich University Hospital, Norwich (United Kingdom); Norwich Radiology Academy, Norwich (United Kingdom); Reid, Karen [Norfolk and Norwich University Hospital, Norwich (United Kingdom); Toms, Andoni P. [Norfolk and Norwich University Hospital and University of East Anglia, Norwich (United Kingdom)

    2013-02-15

    The aim of this study was to determine whether high keV monoenergetic reconstruction of dual energy computed tomography (DECT) could be used to overcome the effects of beam hardening artefact that arise from preferential deflection of low energy photons. Two phantoms were used: a Charnley total hip replacement set in gelatine and a Catphan 500. DECT datasets were acquired at 100, 200 and 400 mA (Siemens Definition Flash, 100 and 140 kVp) and reconstructed using a standard combined algorithm (1:1) and then as monoenergetic reconstructions at 10 keV intervals from 40 to 190 keV. Semi-automated segmentation with threshold inpainting was used to obtain the attenuation values and standard deviation (SD) of the streak artefact. High contrast line pair resolution and background noise were assessed using the Catphan 500. Streak artefact is progressively reduced with increasing keV monoenergetic reconstructions. Reconstruction of a 400 mA acquisition at 150 keV results in a reduction in the volume of streak artefact from 65 cm³ to 17 cm³ (74%). There was a decrease in the contrast to noise ratio (CNR) at higher tube voltages, with the peak CNR seen at 70-80 keV. High contrast spatial resolution was maintained at high keV values. Monoenergetic reconstruction of dual energy CT at increasing theoretical kilovoltages reduces the streak artefact produced by beam hardening from orthopaedic prostheses, accompanied by a modest increase in heterogeneity of background image attenuation and a decrease in contrast to noise ratio, but no deterioration in high contrast line pair resolution. (orig.)
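
Two quantities drive the trade-off described above: a pseudo-monoenergetic image synthesized from the two kVp acquisitions, and the contrast-to-noise ratio used to find the optimal keV. The sketch below uses a plain pixelwise linear blend and invented ROI values; real scanners derive the keV-dependent weights from basis-material decomposition.

```python
import math

def blend(img_low, img_high, w):
    """Pseudo-monoenergetic image as a pixelwise linear blend of the low- and
    high-kVp images (the blending weight stands in for the target keV)."""
    return [w * lo + (1.0 - w) * hi for lo, hi in zip(img_low, img_high)]

def cnr(roi, background):
    """Contrast-to-noise ratio: |difference of means| / noise (SD) in background."""
    m_r = sum(roi) / len(roi)
    m_b = sum(background) / len(background)
    sd_b = math.sqrt(sum((v - m_b) ** 2 for v in background) / len(background))
    return (abs(m_r - m_b) / sd_b) if sd_b else float("inf")

# Hypothetical ROI values at two reconstructions: the higher-keV image has
# less streak (lower background SD) but also less contrast
roi_70,  bg_70  = [100.0, 102.0, 98.0, 100.0], [50.0, 52.0, 48.0, 50.0]
roi_150, bg_150 = [80.0, 81.0, 79.0, 80.0],    [60.0, 61.0, 59.0, 60.0]
print(round(cnr(roi_70, bg_70), 2), round(cnr(roi_150, bg_150), 2))
```

The illustrative numbers reproduce the qualitative finding: the high-keV setting suppresses streak but yields a lower CNR than the 70-80 keV range.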

  12. Autocorrelation based reconstruction of two-dimensional binary objects

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.; Castaneda, R.

    2005-10-01

    A method for reconstructing two-dimensional binary objects from their autocorrelation function is discussed. The objects consist of a finite set of identical elements. The reconstruction algorithm is based on the concept of a class of element pairs, defined as the set of element pairs with the same separation vector. This concept makes it possible to resolve the redundancy introduced by the element pairs of each class. It is also shown that different objects, consisting of an equal number of elements and the same classes of pairs, provide Fraunhofer diffraction patterns with identical intensity distributions. However, the method predicts all the possible objects that produce the same Fraunhofer pattern. (author)
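
The "classes of element pairs" can be computed directly: group all ordered element pairs by their separation vector, which is what the autocorrelation (outside its central peak) records. The sketch below also shows the trivial ambiguity the record alludes to: a translated copy of an object has exactly the same classes of pairs, hence the same Fraunhofer intensity.

```python
from collections import Counter

def pair_classes(points):
    """Group ordered element pairs of a binary object by separation vector;
    each class collects the pairs sharing the same vector."""
    return Counter((bx - ax, by - ay)
                   for i, (ax, ay) in enumerate(points)
                   for j, (bx, by) in enumerate(points) if i != j)

# A 3-element collinear object and a translated copy of it
obj1 = [(0, 0), (1, 0), (2, 0)]
obj2 = [(5, 3), (6, 3), (7, 3)]

print(pair_classes(obj1) == pair_classes(obj2))  # -> True
print(pair_classes(obj1)[(1, 0)])                # -> 2 (two pairs share this vector)
```

Enumerating all point sets consistent with a given multiset of separation vectors is exactly the inverse problem the record's algorithm addresses.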

  13. Comparison of the ANN-PSO and ANN-GA Methods in Modelling the Feed Composition of Etawa Crossbreed (PE) Goats for Nutrient Content Optimization

    Directory of Open Access Journals (Sweden)

    Canny Amerilyse Caesar

    2016-09-01

    Milk is an animal protein source that contains all of the substances needed by the human body. The main milk-producing cattle in Indonesia are dairy cows, but their milk production has not fulfilled society's needs. An alternative is the goat, in particular the Etawa crossbreed (PE). The nutrient content of milk is greatly influenced by several factors, one of which is feed. The PE goat livestock division of the UPT Cattle Breeding and Cattle Food Greenery in Singosari-Malang still faces the problem of a limited ability to determine the feed composition for PE goats, and this flaw affects the quality of the milk produced. A model of milk nutrient content is therefore needed to determine the feed composition that produces premium milk with optimum nutrient content. The authors use the Artificial Neural Network (ANN) and Particle Swarm Optimization (PSO) methods to model goat feed for optimizing the nutrient content of goat milk. In a test case with a goat weight of 36 kg and a feed of 70% Odot grass and 30% Raja grass, the ANN-PSO model increased the milk protein content by 0.707% and decreased the fat content by 0.879%. Using the Artificial Neural Network (ANN) with a Genetic Algorithm (GA) increased the protein content by 0.0852% and decreased the fat content by 2.3254%. Key Words: goat milk, optimization, Artificial Neural Network (ANN), Particle Swarm Optimization (PSO), Genetic Algorithm (GA), feed nutrient content.
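
The PSO half of the ANN-PSO pairing can be sketched independently: particles explore the feed-composition space and are attracted to their personal and global bests. Here a toy quadratic objective stands in for the trained ANN that would score each candidate mix; the 70%/30% "ideal" and all PSO constants are illustrative.

```python
import random

def pso(f, bounds, n_particles=20, n_iter=100, seed=1):
    """Minimal particle swarm optimization over box bounds (an illustrative
    stand-in for the ANN-PSO feed-composition optimization)."""
    random.seed(seed)
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: distance from a hypothetical "ideal" mix of 70% Odot grass
# and 30% Raja grass; a trained ANN would supply the real objective
best, val = pso(lambda p: (p[0] - 0.7) ** 2 + (p[1] - 0.3) ** 2,
                [(0.0, 1.0), (0.0, 1.0)])
print([round(x, 2) for x in best])
```

Swapping this loop for a GA (mutation/crossover over the same objective) gives the ANN-GA variant the record compares against.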

  14. Estonia on NATO's doorstep / Mari-Ann Kelam

    Index Scriptorium Estoniae

    Kelam, Mari-Ann, 1946-

    2002-01-01

    Member of the Riigikogu Mari-Ann Kelam writes that the fact that Estonia is among those invited to NATO accession negotiations can still today be regarded as one of the great achievements of our independent statehood, if not a miracle. Author: Isamaaliit (Pro Patria Union). Member of Parliament

  15. Every manor has its own story / Mari-Ann Remmel

    Index Scriptorium Estoniae

    Remmel, Mari-Ann

    2008-01-01

    The publishing house Tänapäev has released the book "Mõisalegendid. Harjumaa" (Manor Legends: Harju County), compiled by Mari-Ann Remmel and designed by Angelika Schneider. The collection also includes historical and genealogical information about the manor houses and their owners

  16. 3D reconstruction based on compressed-sensing (CS)-based framework by using a dental panoramic detector.

    Science.gov (United States)

    Je, U K; Cho, H M; Hong, D K; Cho, H S; Park, Y O; Park, C K; Kim, K S; Lim, H W; Kim, G A; Park, S Y; Woo, T H; Cho, S I

    2016-01-01

    In this work, we propose a practical method that combines dental panoramic and cone-beam CT (CBCT) functionality in a single system by using a single panoramic detector. We implemented a CS-based reconstruction algorithm for the proposed method and performed a systematic simulation to demonstrate its viability for 3D dental X-ray imaging. We successfully reconstructed volumetric images of considerably high accuracy by using a panoramic detector having an active area of 198.4 mm × 6.4 mm and evaluated the reconstruction quality as a function of the pitch (p) and the angle step (Δθ). Our simulation results indicate that the CS-based reconstruction almost completely recovered the phantom structures, as in CBCT, for p ≤ 2.0 and Δθ ≤ 6°, indicating that it is promising for accurate image reconstruction even for large-pitch and few-view data. We expect the proposed method to be applicable to developing a cost-effective, volumetric dental X-ray imaging system. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  17. An ab initio approach to free-energy reconstruction using logarithmic mean force dynamics

    International Nuclear Information System (INIS)

    Nakamura, Makoto; Obata, Masao; Morishita, Tetsuya; Oda, Tatsuki

    2014-01-01

    We present an ab initio approach for evaluating a free energy profile along a reaction coordinate by combining logarithmic mean force dynamics (LogMFD) and first-principles molecular dynamics. The mean force, which is the derivative of the free energy with respect to the reaction coordinate, is estimated using density functional theory (DFT) in the present approach, which is expected to provide an accurate free energy profile along the reaction coordinate. We apply this new method, first-principles LogMFD (FP-LogMFD), to a glycine dipeptide molecule and reconstruct one- and two-dimensional free energy profiles in the framework of DFT. The resultant free energy profile is compared with that obtained by the thermodynamic integration method and by the previous LogMFD calculation using an empirical force-field, showing that FP-LogMFD is a promising method to calculate free energy without empirical force-fields
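The relations this approach relies on are standard: the free energy F along the reaction coordinate ξ is tied to the equilibrium distribution of ξ, and free-energy differences follow by integrating the mean force (the derivative of F with respect to ξ, estimated here with DFT). The notation below is assumed, not taken from the abstract:

```latex
F(\xi) = -k_{\mathrm{B}} T \ln P(\xi),
\qquad
F(\xi_1) - F(\xi_0) = \int_{\xi_0}^{\xi_1} \frac{\partial F}{\partial \xi}\,\mathrm{d}\xi ,
```

where ∂F/∂ξ is the mean force evaluated at each value of ξ, which LogMFD samples on the fly rather than on a fixed grid as in thermodynamic integration.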

  18. [Sensitivity analysis of AnnAGNPS model's hydrology and water quality parameters based on the perturbation analysis method].

    Science.gov (United States)

    Xi, Qing; Li, Zhao-Fu; Luo, Chuan

    2014-05-01

Sensitivity analysis of hydrology and water quality parameters is of great significance for the construction and application of integrated models. Based on the mechanism of the AnnAGNPS model, 31 parameters in four major categories (terrain, hydrology and meteorology, field management, and soil) were selected for sensitivity analysis in the Zhongtian River watershed, a typical small watershed of the hilly region around Taihu Lake, and the perturbation method was used to evaluate the sensitivity of the parameters to the model's simulation results. The results showed that among the 11 terrain parameters, LS was sensitive to all model outputs, while RMN, RS and RVC were moderately to slightly sensitive to the sediment output but insensitive to the remaining outputs. Among the hydrometeorological parameters, CN was highly sensitive to runoff and sediment and relatively sensitive to the remaining outputs. Among the field management, fertilizer and vegetation parameters, CCC, CRM and RR were slightly sensitive to sediment and particulate pollutants, while the six fertilizer parameters (FR, FD, FID, FOD, FIP, FOP) were particularly sensitive for nitrogen and phosphorus nutrients. Among the soil parameters, K was quite sensitive to all outputs except runoff, and the four soil nitrogen and phosphorus ratio parameters (SONR, SINR, SOPR, SIPR) were slightly sensitive to the corresponding outputs. The simulation and verification of runoff in the Zhongtian watershed showed good accuracy, with deviations of less than 10% during 2005-2010. These results provide a direct reference for parameter selection and calibration of the AnnAGNPS model. The runoff simulation results for the study area also proved that the sensitivity analysis is practicable for parameter adjustment, demonstrated the model's adaptability to hydrological simulation in the hilly region of the Taihu Lake basin, and provide a reference for the model's wider application in China.
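The perturbation method described above can be sketched generically: perturb one parameter by a small relative amount, rerun the model, and report the ratio of the relative output change to the relative parameter change. The model and parameter names below are hypothetical stand-ins, not the AnnAGNPS interface:

```python
def sensitivity_index(model, params, name, delta=0.1):
    """One-at-a-time perturbation sensitivity: relative output change
    divided by relative parameter change (hypothetical interface)."""
    base = model(params)
    perturbed = dict(params)
    perturbed[name] = params[name] * (1.0 + delta)
    out = model(perturbed)
    return ((out - base) / base) / delta

# Toy stand-in for a runoff model: output roughly proportional to CN squared.
runoff = lambda p: 0.004 * p["CN"] ** 2 * p["area"]

s = sensitivity_index(runoff, {"CN": 75.0, "area": 12.0}, "CN")
```

An index near 1 means a proportional response; the toy model above yields an index near 2, reflecting its quadratic dependence on CN.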

  19. Renal Cyst Pseudoenhancement: Intraindividual Comparison Between Virtual Monochromatic Spectral Images and Conventional Polychromatic 120-kVp Images Obtained During the Same CT Examination and Comparisons Among Images Reconstructed Using Filtered Back Projection, Adaptive Statistical Iterative Reconstruction, and Model-Based Iterative Reconstruction

    Science.gov (United States)

    Yamada, Yoshitake; Yamada, Minoru; Sugisawa, Koichi; Akita, Hirotaka; Shiomi, Eisuke; Abe, Takayuki; Okuda, Shigeo; Jinzaki, Masahiro

    2015-01-01

    Abstract The purpose of this study was to compare renal cyst pseudoenhancement between virtual monochromatic spectral (VMS) and conventional polychromatic 120-kVp images obtained during the same abdominal computed tomography (CT) examination and among images reconstructed using filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR). Our institutional review board approved this prospective study; each participant provided written informed consent. Thirty-one patients (19 men, 12 women; age range, 59–85 years; mean age, 73.2 ± 5.5 years) with renal cysts underwent unenhanced 120-kVp CT followed by sequential fast kVp-switching dual-energy (80/140 kVp) and 120-kVp abdominal enhanced CT in the nephrographic phase over a 10-cm scan length with a random acquisition order and 4.5-second intervals. Fifty-one renal cysts (maximal diameter, 18.0 ± 14.7 mm [range, 4–61 mm]) were identified. The CT attenuation values of the cysts as well as of the kidneys were measured on the unenhanced images, enhanced VMS images (at 70 keV) reconstructed using FBP and ASIR from dual-energy data, and enhanced 120-kVp images reconstructed using FBP, ASIR, and MBIR. The results were analyzed using the mixed-effects model and paired t test with Bonferroni correction. The attenuation increases (pseudoenhancement) of the renal cysts on the VMS images reconstructed using FBP/ASIR (least square mean, 5.0/6.0 Hounsfield units [HU]; 95% confidence interval, 2.6–7.4/3.6–8.4 HU) were significantly lower than those on the conventional 120-kVp images reconstructed using FBP/ASIR/MBIR (least square mean, 12.1/12.8/11.8 HU; 95% confidence interval, 9.8–14.5/10.4–15.1/9.4–14.2 HU) (all P < .001); on the other hand, the CT attenuation values of the kidneys on the VMS images were comparable to those on the 120-kVp images. Regardless of the reconstruction algorithm, 70-keV VMS images showed

  20. Reconstructing the dark sector interaction with LISA

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Rong-Gen; Yang, Tao [CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100190 (China); Tamanini, Nicola, E-mail: cairg@itp.ac.cn, E-mail: nicola.tamanini@cea.fr, E-mail: yangtao@itp.ac.cn [Institut de Physique Théorique, CEA-Saclay, CNRS UMR 3681, Université Paris-Saclay, F-91191 Gif-sur-Yvette (France)

    2017-05-01

We perform a forecast analysis of the ability of the LISA space-based interferometer to reconstruct the dark sector interaction using gravitational wave standard sirens at high redshift. We employ Gaussian process methods to reconstruct the distance-redshift relation in a model independent way. We adopt simulated catalogues of standard sirens given by merging massive black hole binaries visible by LISA, with an electromagnetic counterpart detectable by future telescopes. The catalogues are based on three different astrophysical scenarios for the evolution of massive black hole mergers based on the semi-analytic model of E. Barausse, Mon. Not. Roy. Astron. Soc. 423 (2012) 2533. We first use these standard siren datasets to assess the potential of LISA in reconstructing a possible interaction between vacuum dark energy and dark matter. Then we combine the LISA cosmological data with supernovae data simulated for the Dark Energy Survey. We consider two scenarios distinguished by the time duration of the LISA mission: 5 and 10 years. Using only LISA standard siren data, the dark sector interaction can be well reconstructed from redshift z ∼1 to z ∼3 (for a 5 years mission) and z ∼1 up to z ∼5 (for a 10 years mission), though the reconstruction is inefficient at lower redshift. When combined with the DES datasets, the interaction is well reconstructed in the whole redshift region from z ∼0 to z ∼3 (5 yr) and z ∼0 to z ∼5 (10 yr), respectively. Massive black hole binary standard sirens can thus be used to constrain the dark sector interaction at redshift ranges not reachable by usual supernovae datasets which probe only the z ≲ 1.5 range. Gravitational wave standard sirens will not only constitute a complementary and alternative way, with respect to familiar electromagnetic observations, to probe the cosmic expansion, but will also provide new tests to constrain possible deviations from the standard ΛCDM dynamics, especially at high redshift.
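Gaussian-process reconstruction of the distance-redshift relation can be sketched with a plain squared-exponential kernel; the kernel choice, hyperparameters, and toy data below are illustrative assumptions, not those of the paper:

```python
import numpy as np

def sq_exp_kernel(x1, x2, sigma_f=1.0, ell=0.5):
    """Squared-exponential covariance between redshift points."""
    d = x1[:, None] - x2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_reconstruct(z_obs, d_obs, noise, z_star):
    """Posterior mean and standard deviation of the distance-redshift
    relation at query redshifts z_star, given noisy siren distances."""
    K = sq_exp_kernel(z_obs, z_obs) + np.diag(noise**2)
    K_s = sq_exp_kernel(z_star, z_obs)
    K_ss = sq_exp_kernel(z_star, z_star)
    alpha = np.linalg.solve(K, d_obs)
    mean = K_s @ alpha
    cov = K_ss - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

With data this smooth and noise this small, the posterior mean effectively interpolates the catalogue; in the paper the same machinery is applied to simulated siren catalogues, and the interaction is then derived from the reconstructed relation.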

  1. FE-ANN based modeling of 3D Simple Reinforced Concrete Girders for Objective Structural Health Evaluation : Tech Transfer Summary

    Science.gov (United States)

    2017-06-01

    The objective of this study was to develop an objective, quantitative method for evaluating damage to bridge girders by using artificial neural networks (ANNs). This evaluation method, which is a supplement to visual inspection, requires only the res...

  2. Color Doppler Ultrasonography-Targeted Perforator Mapping and Angiosome-Based Flap Reconstruction

    DEFF Research Database (Denmark)

    Gunnarsson, Gudjon Leifur; Tei, Troels; Thomsen, Jørn Bo

    2016-01-01

Knowledge about perforators and angiosomes has inspired new and innovative flap designs for reconstruction of defects throughout the body. The purpose of this article is to share our experience using color Doppler ultrasonography (CDU)-targeted perforator mapping and angiosome-based flap reconstruction

  3. Interleaved EPI diffusion imaging using SPIRiT-based reconstruction with virtual coil compression.

    Science.gov (United States)

    Dong, Zijing; Wang, Fuyixue; Ma, Xiaodong; Zhang, Zhe; Dai, Erpeng; Yuan, Chun; Guo, Hua

    2018-03-01

To develop a novel diffusion imaging reconstruction framework based on iterative self-consistent parallel imaging reconstruction (SPIRiT) for multishot interleaved echo planar imaging (iEPI), with computation acceleration by virtual coil compression. As a general approach for autocalibrating parallel imaging, SPIRiT improves the performance of traditional generalized autocalibrating partially parallel acquisitions (GRAPPA) methods in that the formulation with self-consistency is better conditioned, suggesting SPIRiT to be a better candidate in k-space-based reconstruction. In this study, a general SPIRiT framework is adopted to incorporate both coil sensitivity and phase variation information as virtual coils and then is applied to 2D navigated iEPI diffusion imaging. To reduce the reconstruction time when using a large number of coils and shots, a novel shot-coil compression method is proposed for computation acceleration in Cartesian sampling. Simulations and in vivo experiments were conducted to evaluate the performance of the proposed method. Compared with the conventional coil compression, the shot-coil compression achieved higher compression rates with reduced errors. The simulation and in vivo experiments demonstrate that the SPIRiT-based reconstruction outperformed the existing method, realigned GRAPPA, and provided superior images with reduced artifacts. The SPIRiT-based reconstruction with virtual coil compression is a reliable method for high-resolution iEPI diffusion imaging. Magn Reson Med 79:1525-1531, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  4. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and consequently a large radiation dose, to produce high quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas-Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total variation based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and of a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections is a shorter acquisition time, which reduces the opportunity for motion artifacts if the sample moves during image acquisition. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
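The randomized Kaczmarz component of the combined solver can be sketched on its own: each step projects the current iterate onto the hyperplane of one randomly chosen measurement row, with rows sampled in proportion to their squared norms. The Douglas-Rachford splitting and total-variation terms of the full algorithm are omitted here, so this is a sketch of one ingredient, not the authors' solver:

```python
import numpy as np

def randomized_kaczmarz(A, b, iters=5000, seed=0):
    """Solve A x ≈ b by projecting onto one row's hyperplane per step,
    sampling rows with probability proportional to ||a_i||^2."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    row_norms = np.sum(A**2, axis=1)
    probs = row_norms / row_norms.sum()
    x = np.zeros(n)
    for _ in range(iters):
        i = rng.choice(m, p=probs)
        x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

For a consistent system the iterates converge linearly in expectation, at a rate governed by the scaled condition number of A; each step touches only one row, which is what makes the method attractive at CT scale.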

  5. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning.

    Science.gov (United States)

    Zhang, Shang; Dong, Yuhan; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-02-22

The miniaturization of spectrometers can broaden the application area of spectrometry and has considerable academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising implementation that utilizes broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose a spectral reconstruction algorithm based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well suited to spectral reconstruction whether or not the spectra are directly sparse. For spectra that are not directly sparse, sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect for fabricating a practical miniature spectrometer.
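The linear-system view of filter-based spectral reconstruction invites a minimal sparse-optimization sketch: recover a few-line spectrum s from broadband filter measurements y = A s by iterative soft-thresholding (ISTA). The Gaussian sensing matrix, parameters, and solver below are idealized assumptions for illustration, not the authors' filter bank or algorithm:

```python
import numpy as np

def ista(A, y, lam=0.001, iters=4000):
    """Iterative soft-thresholding for min 0.5*||A s - y||^2 + lam*||s||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(iters):
        g = s - A.T @ (A @ s - y) / L      # gradient step on the data term
        s = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return s

# Toy setup: 32 broadband filters sensing a 128-channel, 3-line spectrum.
rng = np.random.default_rng(1)
A = rng.normal(size=(32, 128)) / np.sqrt(32)
s_true = np.zeros(128)
s_true[[20, 70, 110]] = [1.0, 0.6, 0.8]
y = A @ s_true
s_hat = ista(A, y)
```

With far fewer measurements than spectral channels, the ℓ1 penalty is what makes the underdetermined system recoverable; for spectra that are not sparse in the native basis, the paper's dictionary-learning step supplies a basis in which they are.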

  6. Reconstruction of pressure sores with perforator-based propeller flaps.

    Science.gov (United States)

    Jakubietz, Rafael G; Jakubietz, Danni F; Zahn, Robert; Schmidt, Karsten; Meffert, Rainer H; Jakubietz, Michael G

    2011-03-01

    Perforator flaps have been successfully used for reconstruction of pressure sores. Although V-Y advancement flaps approximate debrided wound edges, perforator-based propeller flaps allow rotation of healthy tissue into the defect. Perforator-based propeller flaps were planned in 13 patients. Seven pressure sores were over the sacrum, five over the ischial tuberosity, and one on the tip of the scapula. Three patients were paraplegic, six were bedridden, and five were ambulatory. In three patients, no perforators were found. In 10 patients, propeller flaps were transferred. In two patients, total flap necrosis occurred, which was reconstructed with local advancement flaps. In two cases, a wound dehiscence occurred and had to be revised. One hematoma required evacuation. No further complications were noted. No recurrence at the flap site occurred. Local perforator flaps allow closure of pressure sores without harvesting muscle. The propeller version has the added benefit of transferring tissue from a distant site, avoiding reapproximation of original wound edges. Twisting of the pedicle may cause torsion and venous obstruction. This can be avoided by dissecting a pedicle of at least 3 cm. Propeller flaps are a safe option for soft tissue reconstruction of pressure sores. © Thieme Medical Publishers.

  7. Analyser-based phase contrast image reconstruction using geometrical optics

    International Nuclear Information System (INIS)

    Kitchen, M J; Pavlov, K M; Siu, K K W; Menk, R H; Tromba, G; Lewis, R A

    2007-01-01

    Analyser-based phase contrast imaging can provide radiographs of exceptional contrast at high resolution (<100 μm), whilst quantitative phase and attenuation information can be extracted using just two images when the approximations of geometrical optics are satisfied. Analytical phase retrieval can be performed by fitting the analyser rocking curve with a symmetric Pearson type VII function. The Pearson VII function provided at least a 10% better fit to experimentally measured rocking curves than linear or Gaussian functions. A test phantom, a hollow nylon cylinder, was imaged at 20 keV using a Si(1 1 1) analyser at the ELETTRA synchrotron radiation facility. Our phase retrieval method yielded a more accurate object reconstruction than methods based on a linear fit to the rocking curve. Where reconstructions failed to map expected values, calculations of the Takagi number permitted distinction between the violation of the geometrical optics conditions and the failure of curve fitting procedures. The need for synchronized object/detector translation stages was removed by using a large, divergent beam and imaging the object in segments. Our image acquisition and reconstruction procedure enables quantitative phase retrieval for systems with a divergent source and accounts for imperfections in the analyser

  8. The Reconstruction Toolkit (RTK), an open-source cone-beam CT reconstruction toolkit based on the Insight Toolkit (ITK)

    International Nuclear Information System (INIS)

    Rit, S; Vila Oliva, M; Sarrut, D; Brousmiche, S; Labarbe, R; Sharp, G C

    2014-01-01

We propose the Reconstruction Toolkit (RTK, http://www.openrtk.org), an open-source toolkit for fast cone-beam CT reconstruction, based on the Insight Toolkit (ITK) and using GPU code extracted from Plastimatch. RTK is developed by an open consortium (see affiliations) under the non-contaminating Apache 2.0 license. The quality of the platform is daily checked with regression tests in partnership with Kitware, the company supporting ITK. Several features are already available: Elekta, Varian and IBA inputs, multi-threaded Feldkamp-Davis-Kress reconstruction on CPU and GPU, Parker short scan weighting, multi-threaded CPU and GPU forward projectors, etc. Each feature is either accessible through command line tools or C++ classes that can be included in independent software. A MIDAS community has been opened to share CatPhan datasets of several vendors (Elekta, Varian and IBA). RTK will be used in the upcoming cone-beam CT scanner developed by IBA for proton therapy rooms. Many features are under development: new input format support, iterative reconstruction, hybrid Monte Carlo / deterministic CBCT simulation, etc. RTK has been built to freely share tomographic reconstruction developments between researchers and is open for new contributions.

  9. Model-based iterative reconstruction for reduction of radiation dose in abdominopelvic CT: comparison to adaptive statistical iterative reconstruction.

    Science.gov (United States)

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2013-12-01

To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR showed significantly lower image noise and fewer streak artifacts than both L-ASIR and UL-ASIR, but did not differ significantly from L-ASIR in diagnostic acceptability (p>0.65) or in diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.

  10. Process Control Strategies for Dual-Phase Steel Manufacturing Using ANN and ANFIS

    Science.gov (United States)

    Vafaeenezhad, H.; Ghanei, S.; Seyedein, S. H.; Beygi, H.; Mazinani, M.

    2014-11-01

In this research, a comprehensive soft-computing approach is presented for analyzing the parameters that influence the manufacturing of dual-phase steels. A set of experimental data was gathered to build the database used for training and testing both artificial neural networks (ANN) and an adaptive neuro-fuzzy inference system (ANFIS). The input parameters were intercritical annealing temperature, carbon content, and holding time, with martensite percentage as the output. A fraction of the data set was used to train both the ANN and the ANFIS, and the remainder was used to validate the performance of the trained networks on unseen data. The coefficient of determination and the root mean squared error were chosen to compare the results. With artificial intelligence methods, it is not necessary to establish a preliminary mathematical model or to formulate the parameters affecting it. In conclusion, the martensite percentage corresponding to a given set of manufacturing parameters can be determined prior to production using these controlling algorithms. Although the results acquired from both the ANN and the ANFIS are very encouraging, the proposed ANFIS outperforms the ANN and offers greater cost-reduction benefit.

  11. Projection computation based on pixel in simultaneous algebraic reconstruction technique

    International Nuclear Information System (INIS)

    Wang Xu; Chen Zhiqiang; Xiong Hua; Zhang Li

    2005-01-01

SART is an important algorithm for image reconstruction, in which the projection computation takes more than half of the reconstruction time. An efficient way to compute the projection coefficient matrix, together with memory optimization, is presented in this paper. Unlike the usual method, projection lines are located on a per-pixel basis, and the subsequent projection coefficient computation can reuse these results. The correlation between projection lines and pixels can be exploited to optimize the computation. (authors)
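For orientation, one SART sweep with a precomputed projection matrix looks as follows; the paper's contribution, the pixel-driven computation of that matrix itself, is not reproduced here, and the toy system is an assumption for illustration:

```python
import numpy as np

def sart_sweep(A, b, x, relax=1.0):
    """One SART update: backproject the ray residuals normalized by the
    row sums of A, then normalize per pixel by the column sums."""
    row_sum = A.sum(axis=1)
    col_sum = A.sum(axis=0)
    row_sum[row_sum == 0] = 1.0
    col_sum[col_sum == 0] = 1.0
    residual = (b - A @ x) / row_sum
    return x + relax * (A.T @ residual) / col_sum

# Toy system: 40 rays through an 8-pixel image (nonnegative coefficients).
rng = np.random.default_rng(2)
A = rng.uniform(0.0, 1.0, size=(40, 8))
x_true = rng.uniform(0.5, 1.5, size=8)
b = A @ x_true
x = np.zeros(8)
for _ in range(5000):
    x = sart_sweep(A, b, x)
```

Every sweep touches every entry of A twice (forward and backprojection), which is why the coefficient computation the paper optimizes dominates the reconstruction time.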

  12. Edge Artifacts in Point Spread Function-based PET Reconstruction in Relation to Object Size and Reconstruction Parameters

    Directory of Open Access Journals (Sweden)

    Yuji Tsutsui

    2017-06-01

Objective(s): We evaluated edge artifacts in relation to phantom diameter and reconstruction parameters in point spread function (PSF)-based positron emission tomography (PET) image reconstruction. Methods: PET data were acquired from an original cone-shaped phantom filled with 18F solution (21.9 kBq/mL) for 10 min using a Biograph mCT scanner. The images were reconstructed using the baseline ordered subsets expectation maximization (OSEM) algorithm and OSEM with a PSF correction model. The reconstruction parameters included a pixel size of 1.0, 2.0, or 3.0 mm, 1-12 iterations, 24 subsets, and a full width at half maximum (FWHM) of the post-filter Gaussian filter of 1.0, 2.0, or 3.0 mm. We compared both the maximum recovery coefficient (RCmax) and the mean recovery coefficient (RCmean) in the phantom at different diameters. Results: The OSEM images had no edge artifacts, but the OSEM with PSF images had a dense edge delineating the hot phantom at diameters of 10 mm or more and a dense spot at the center at diameters of 8 mm or less. The dense edge was clearly observed on images with a small pixel size, a Gaussian filter with a small FWHM, and a high number of iterations. At a phantom diameter of 6-7 mm, the RCmax for the OSEM and OSEM with PSF images was 60% and 140%, respectively (pixel size: 1.0 mm; FWHM of the Gaussian filter: 2.0 mm; iterations: 2). The RCmean of the OSEM with PSF images did not exceed 100%. Conclusion: PSF-based image reconstruction resulted in edge artifacts, whose degree depends on the pixel size, number of iterations, FWHM of the Gaussian filter, and object size.
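The recovery coefficients compared above are simple ratios of measured to true activity concentration; a minimal sketch with a hypothetical image array and object mask:

```python
import numpy as np

def recovery_coefficients(img, mask, true_activity):
    """RCmax and RCmean: the maximum and mean measured activity inside
    the object mask, each divided by the true activity concentration."""
    vals = img[mask]
    return vals.max() / true_activity, vals.mean() / true_activity
```

An RCmax above 100%, as seen at the phantom edge in the PSF images, signals overshoot (a Gibbs-like edge artifact) rather than genuinely higher activity.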

  13. Anne-Marie Sargueil: ilu on kasulik / intervjueerinud Emilie Toomela

    Index Scriptorium Estoniae

    Sargueil, Anne-Marie

    2015-01-01

Anne-Marie Sargueil, head of the French Institute of Design, spoke about French and Scandinavian design, the design preferences of the French, new directions in the field of design, and the exhibition "20 French Design Icons" open at the Estonian Museum of Applied Art and Design

  14. CT image reconstruction system based on hardware implementation

    International Nuclear Information System (INIS)

    Silva, Hamilton P. da; Evseev, Ivan; Schelin, Hugo R.; Paschuk, Sergei A.; Milhoretto, Edney; Setti, Joao A.P.; Zibetti, Marcelo; Hormaza, Joel M.; Lopes, Ricardo T.

    2009-01-01

Full text: Timing is very important for medical imaging systems, which can nowadays be synchronized by vital human signals such as heartbeats or breathing. The use of hardware-implemented devices in such a system has advantages, considering the high speed of information processing combined with arbitrarily low cost on the market. This article describes a hardware system based on field-programmable gate array (FPGA) logic, model Cyclone II from ALTERA Corporation. The hardware was implemented on the UP3 ALTERA kit. A partially connected neural network with unitary weights was programmed. The system was tested with 60 tomographic projections, 100 points each, of the Shepp-Logan phantom created by MATLAB. The main restriction was found to be the memory size available on the device: the dynamic range of the reconstructed image was limited to 0-65535. Also, the normalization factor must be observed in order not to saturate the image during the reconstruction and filtering process. The tests show that it is in principle possible to build CT image reconstruction systems for any reasonable amount of input data by arranging parallel operation of hardware units as tested here. However, further studies are necessary for a better understanding of error propagation from the tomographic projections to the reconstructed image within the implemented method. (author)

  15. Neural network modeling of energy use and greenhouse gas emissions of watermelon production systems

    Directory of Open Access Journals (Sweden)

    Ashkan Nabavi-Pelesaraei

    2016-01-01

This study was conducted in order to determine energy consumption and to model and analyze the input-output relations, energy efficiencies and GHG emissions of watermelon production using artificial neural networks (ANNs) in the Guilan province of Iran, based on three different farm sizes. For this purpose, the initial data were collected from 120 watermelon producers in the Langroud and Chaf region, two small cities in the Guilan province. The results indicated that the total average energy input for watermelon production was 40228.98 MJ ha−1. Chemical fertilizers (76.49%) were the largest energy input for watermelon production. Moreover, the share of non-renewable energy (96.24%) was greater than that of renewable energy (3.76%) in watermelon production. The energy use efficiency, energy productivity and net energy were calculated as 1.29, 0.68 kg MJ−1 and 11733.64 MJ ha−1, respectively. With respect to the GHG analysis, total GHG emissions averaged about 1015 kgCO2eq. ha−1. The results illustrated that nitrogen had the highest share of GHG emissions for watermelon production (54.23%), followed by diesel fuel (16.73%) and electricity (15.45%). In this study, the Levenberg-Marquardt learning algorithm was used for training the ANNs based on data collected from watermelon producers. The ANN model with an 11-10-2 structure was the best one for predicting watermelon yield and GHG emissions. In the best topology, the coefficient of determination (R2) was calculated as 0.969 and 0.995 for yield and GHG emissions of watermelon production, respectively. Furthermore, the results of sensitivity analysis revealed that seed and human labor had the highest sensitivity in modeling watermelon yield and GHG emissions, respectively.
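An 11-10-2 feed-forward network of the kind described can be sketched in a few lines of numpy. For brevity this sketch trains with plain gradient descent rather than the Levenberg-Marquardt algorithm used in the study, and everything besides the 11-10-2 topology (initialization, learning rate, activation) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

# 11 inputs (energy inputs per hectare), 10 hidden tanh units,
# 2 outputs (yield, GHG emissions): the 11-10-2 topology.
W1 = rng.normal(0, 0.5, (10, 11)); b1 = np.zeros(10)
W2 = rng.normal(0, 0.5, (2, 10));  b2 = np.zeros(2)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return W2 @ h + b2, h

def train_step(x, y, lr=0.05):
    """One gradient-descent step on squared error for a single sample
    (the paper trains with Levenberg-Marquardt instead)."""
    global W1, b1, W2, b2
    out, h = forward(x)
    err = out - y
    dW2 = np.outer(err, h); db2 = err
    dh = (W2.T @ err) * (1 - h**2)       # backprop through tanh
    dW1 = np.outer(dh, x); db1 = dh
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1
    return float(err @ err)
```

Levenberg-Marquardt replaces the plain gradient step with a damped Gauss-Newton step, which is why it typically converges in far fewer epochs on small networks like this one.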

  16. Regional MLEM reconstruction strategy for PET-based treatment verification in ion beam radiotherapy

    International Nuclear Information System (INIS)

    Gianoli, Chiara; Riboldi, Marco; Fattori, Giovanni; Baselli, Giuseppe; Baroni, Guido; Bauer, Julia; Debus, Jürgen; Parodi, Katia; De Bernardi, Elisabetta

    2014-01-01

In ion beam radiotherapy, PET-based treatment verification provides a consistency check of the delivered treatment with respect to a simulation based on the treatment planning. In this work the region-based MLEM reconstruction algorithm is proposed as a new evaluation strategy in PET-based treatment verification. The comparative evaluation is based on reconstructed PET images in selected regions, which are automatically identified on the expected PET images according to homogeneity in activity values. The strategy was tested on numerical and physical phantoms, simulating mismatches between the planned and measured β+ activity distributions. The region-based MLEM reconstruction was demonstrated to be robust against noise, and the sensitivity of the strategy was comparable to three voxel units, corresponding to 6 mm in numerical phantoms. The robustness of the region-based MLEM evaluation outperformed the voxel-based strategies. The potential of the proposed strategy was also retrospectively assessed on patient data and further clinical validation is envisioned. (paper)
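The voxel-wise MLEM update that the region-based variant builds on is standard; a minimal numpy sketch (the region grouping itself, the paper's contribution, is not reproduced, and the system matrix below is a toy assumption):

```python
import numpy as np

def mlem(A, counts, iters=2000):
    """Standard MLEM: multiplicative update
    x <- (x / sens) * A^T (counts / (A x)), with sensitivity sens = A^T 1."""
    sens = A.sum(axis=0)
    x = np.ones(A.shape[1])
    for _ in range(iters):
        proj = A @ x
        proj[proj == 0] = 1e-12          # guard against division by zero
        x *= (A.T @ (counts / proj)) / sens
    return x
```

The multiplicative form preserves nonnegativity of the activity estimate at every iteration; the region-based variant constrains groups of voxels identified on the expected image to share activity values, which is what gives it its robustness to noise.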

  17. Application of ANN and fuzzy logic algorithms for streamflow ...

    Indian Academy of Sciences (India)

Department of Soil and Water Engineering, College of Technology and Engineering, Maharana Pratap University of ... It was found that ANN model performance improved with increasing .... algorithm uses supervised learning that provides.

  18. Spectrotemporal CT data acquisition and reconstruction at low dose

    International Nuclear Information System (INIS)

    Clark, Darin P.; Badea, Cristian T.; Lee, Chang-Lung; Kirsch, David G.

    2015-01-01

    problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. Results: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. Conclusions: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time

  19. Distributed MRI reconstruction using Gadgetron-based cloud computing.

    Science.gov (United States)

    Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S

    2015-03-01

    To expand the open source Gadgetron reconstruction framework to support distributed computing and to demonstrate that a multinode version of the Gadgetron can be used to provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., cardiac and neuroimaging applications). The proposed setup was able to handle acquisition and ℓ1-SPIRiT reconstruction of nine high temporal resolution real-time, cardiac short axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm³ isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed computing enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.

  20. A Spectral Reconstruction Algorithm of Miniature Spectrometer Based on Sparse Optimization and Dictionary Learning

    Science.gov (United States)

    Zhang, Shang; Fu, Hongyan; Huang, Shao-Lun; Zhang, Lin

    2018-01-01

    The miniaturization of spectrometers can broaden the application area of spectrometry, and has considerable academic and industrial value. Among various miniaturization approaches, filter-based miniaturization is a promising one, utilizing broadband filters with distinct transmission functions. Mathematically, filter-based spectral reconstruction can be modeled as solving a system of linear equations. In this paper, we propose an algorithm of spectral reconstruction based on sparse optimization and dictionary learning. To verify the feasibility of the reconstruction algorithm, we design and implement a simple prototype of a filter-based miniature spectrometer. The experimental results demonstrate that sparse optimization is well applicable to spectral reconstruction whether the spectra are directly sparse or not. As for spectra that are not directly sparse, their sparsity can be enhanced by dictionary learning. In conclusion, the proposed approach has a bright application prospect in fabricating a practical miniature spectrometer. PMID:29470406
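    The linear-equations view of filter-based reconstruction can be sketched numerically. The toy example below (not the paper's algorithm or data) poses y = Ax for a sparse spectrum x and recovers it with ISTA, a standard soft-thresholding solver for ℓ1-regularized least squares; the Gaussian filter matrix, sparsity level, and regularization weight are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: m broadband filters observing an n-point spectrum.
# A Gaussian matrix stands in for the (unknown) filter transmission matrix.
m, n = 50, 100
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.uniform(0.5, 1.5, 5)  # sparse spectrum
y = A @ x_true                                   # simulated filter readings

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
lam = 0.05
L = np.linalg.norm(A, 2) ** 2                    # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):
    x = x - A.T @ (A @ x - y) / L                # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(rel_err)  # small relative reconstruction error
```

    With a learned dictionary D, the same solver applies to y = (AD)z with x = Dz, which is how dictionary learning extends sparse recovery to spectra that are not directly sparse.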

  1. Novel Fourier-based iterative reconstruction for sparse fan projection using alternating direction total variation minimization

    International Nuclear Information System (INIS)

    Jin Zhao; Zhang Han-Ming; Yan Bin; Li Lei; Wang Lin-Yuan; Cai Ai-Long

    2016-01-01

    Sparse-view x-ray computed tomography (CT) imaging is an interesting topic in the CT field and can efficiently decrease radiation dose. Compared with spatial-domain reconstruction, a Fourier-based algorithm has advantages in reconstruction speed and memory usage. A novel Fourier-based iterative reconstruction technique that utilizes the non-uniform fast Fourier transform (NUFFT) is presented in this work, along with advanced total variation (TV) regularization, for fan-beam sparse-view CT. The introduction of a selective matrix contributes to improved reconstruction quality. The new method employs the NUFFT and its adjoint to iterate back and forth between the Fourier and image spaces. The performance of the proposed algorithm is demonstrated through a series of digital simulations and experimental phantom studies. Results of the proposed algorithm are compared with those of existing TV-regularized techniques based on the compressed sensing method, as well as the basic algebraic reconstruction technique. Compared with the existing TV-regularized techniques, the proposed Fourier-based technique significantly improves the convergence rate and reduces memory allocation. (paper)

  2. Early Cretaceous terrestrial ecosystems in East Asia based on food-web and energy-flow models

    Science.gov (United States)

    Matsukawa, M.; Saiki, K.; Ito, M.; Obata, I.; Nichols, D.J.; Lockley, M.G.; Kukihara, R.; Shibata, K.

    2006-01-01

    In recent years, there has been global interest in environments and ecosystems around the world. Reconstructing past environments and ecosystems helps us understand those of the present and the future, since present environments and ecosystems form an evolving continuum with those of the past and the future. This paper demonstrates the contribution of geology and paleontology to such continua. Using fossils, we can estimate past population density as an ecosystem index based on food-web and energy-flow models. Late Mesozoic nonmarine deposits are distributed widely on the eastern Asian continent and contain various kinds of fossils such as fishes, amphibians, reptiles, dinosaurs, mammals, bivalves, gastropods, insects, ostracodes, conchostracans, terrestrial plants, and others. These fossil organisms are useful for late Mesozoic terrestrial ecosystem reconstruction using food-web and energy-flow models. We chose Early Cretaceous fluvio-lacustrine basins in the Choyr area, southeastern Mongolia, and the Tetori area, Japan, for these analyses and as a potential model for reconstruction of other similar basins in East Asia. The food-web models are restored based on taxa that occurred in these basins. They form four or five trophic levels in an energy pyramid consisting of rich primary producers at its base and smaller biotas higher in the food web. This is the general energy pyramid of a typical ecosystem. Concerning the population densities of vertebrate taxa per 1 km² in these basins, some differences are recognized between the Early Cretaceous and the present. For example, Cretaceous estimates suggest 2.3 to 4.8 times as many herbivores and 26.0 to 105.5 times the carnivore population. These differences are useful for the evaluation of past population densities of vertebrate taxa. Such differences may also be caused by the different metabolism of different taxa.
Preservation may also be a factor, and we recognize that various problems occur in

  3. ANN Surface Roughness Optimization of AZ61 Magnesium Alloy Finish Turning: Minimum Machining Times at Prime Machining Costs.

    Science.gov (United States)

    Abbas, Adel Taha; Pimenov, Danil Yurievich; Erdakov, Ivan Nikolaevich; Taha, Mohamed Adel; Soliman, Mahmoud Sayed; El Rayes, Magdy Mostafa

    2018-05-16

    Magnesium alloys are widely used in aerospace vehicles and modern cars, due to their rapid machinability at high cutting speeds. A novel Edgeworth–Pareto optimization of an artificial neural network (ANN) is presented in this paper for surface roughness (Ra) prediction of one component in computer numerical control (CNC) turning over minimal machining time (Tm) and at prime machining costs (C). An ANN is built in the Matlab programming environment, based on a 4-12-3 multi-layer perceptron (MLP), to predict Ra, Tm, and C in relation to cutting speed, vc, depth of cut, ap, and feed per revolution, fr. For the first time, a profile of an AZ61 alloy workpiece after finish turning is constructed using an ANN for the range of experimental values vc, ap, and fr. The global minimum length of a three-dimensional estimation vector was defined with the following coordinates: Ra = 0.087 μm, Tm = 0.358 min/cm³, C = $8.2973. Likewise, the corresponding finish-turning parameters were also estimated: cutting speed vc = 250 m/min, cutting depth ap = 1.0 mm, and feed per revolution fr = 0.08 mm/rev. The ANN model achieved a reliable prediction accuracy of ±1.35% for surface roughness.
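    As a rough illustration of the kind of network described above, here is a minimal 4-12-3 multi-layer perceptron trained with plain batch gradient descent; the synthetic inputs, targets, and training details are stand-ins, not the paper's Matlab model or experimental data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 4 input features, 3 smooth target outputs.
X = rng.random((200, 4))
Y = np.tanh(X @ rng.normal(size=(4, 3)))         # arbitrary smooth targets

# 4-12-3 MLP: 12 tanh hidden units, linear output layer.
W1 = 0.5 * rng.normal(size=(4, 12)); b1 = np.zeros(12)
W2 = 0.5 * rng.normal(size=(12, 3)); b2 = np.zeros(3)

lr = 0.05
for _ in range(5000):                            # batch gradient descent on MSE
    H = np.tanh(X @ W1 + b1)                     # hidden activations
    P = H @ W2 + b2                              # network predictions
    G = 2.0 * (P - Y) / len(X)                   # d(MSE)/dP
    GH = (G @ W2.T) * (1.0 - H ** 2)             # back-prop through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)

mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
print(mse)  # training error after fitting
```

    The Pareto-front search over vc, ap, and fr described in the abstract would then be run on top of a trained predictor like this one.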

  4. Anterior Cranial Base Reconstruction with a Reverse Temporalis Muscle Flap and Calvarial Bone Graft

    Directory of Open Access Journals (Sweden)

    Seung Gee Kwon

    2012-07-01

    Full Text Available Background: Cranial base defects are challenging to reconstruct without serious complications. Although free tissue transfer has been used widely and efficiently, it still has the limitations of a long operation time, the burden of microanastomosis, and donor site morbidity. We propose using a reverse temporalis muscle flap and calvarial bone graft as an alternative to a free flap for anterior cranial base reconstruction. Methods: Between April 2009 and February 2012, cranial base reconstructions using an autologous calvarial split bone graft combined with a reverse temporalis muscle flap were performed in five patients. Medical records were retrospectively analyzed, and postoperative computed tomography scans, magnetic resonance imaging, and angiography findings were examined to evaluate graft survival and flap viability. Results: The mean follow-up period was 11.8 months and the mean operation time for reconstruction was 8.4±3.36 hours. The defects involved the anterior cranial base, including the orbital roof and the frontal and ethmoidal sinus. All reconstructions were successful. Viable flap vascularity and bone survival were observed. There were no serious complications except for acceptable donor site depressions, which were easily corrected with minor procedures. Conclusions: The reverse temporalis muscle flap provides sufficient bulkiness to fill dead space and sufficient vascularity to withstand infection. The calvarial bone graft provides a rigid framework, which is critical for maintaining the cranial base structure. Combined anterior cranial base reconstruction with a reverse temporalis muscle flap and calvarial bone graft could be a viable alternative to free tissue transfer.

  5. How Old Is the Artist? [Kui vana on kunstnik?] / Anneli Porri

    Index Scriptorium Estoniae

    Porri, Anneli, 1980-

    2003-01-01

    As part of the international art education conference "InSea on Sea": at the Art Academy gallery, the exhibition "MÄRKmed" ("Notes") of works by students of the Tallinn Art School, curated by Karin Laansoo; at the Draakoni gallery, the exhibition "Sisseastumiseksam maailma" ("Entrance Exam to the World") by the art studio of Viljandi Maagümnaasium, curated by Mari Sobolev; at the National Library, the exhibition "Kokkuvõte" ("Summary") of works by this year's graduates of the Estonian Academy of Arts (EKA), curated by Anneli Porri, and the exhibition "Leitud tagahoovist" ("Found in the Backyard"); and at the Kullo gallery, the international exhibition "Dialoog erinevuste vahel" ("Dialogue Between Differences").

  6. Exact estimation of biodiesel cetane number (CN) from its fatty acid methyl esters (FAMEs) profile using partial least square (PLS) adapted by artificial neural network (ANN)

    International Nuclear Information System (INIS)

    Hosseinpour, Soleiman; Aghbashlo, Mortaza; Tabatabaei, Meisam; Khalife, Esmail

    2016-01-01

    Highlights: • Estimating the biodiesel CN from its FAMEs profile using an ANN-based PLS approach. • Comparing the capability of the ANN-adapted PLS approach with the standard PLS model. • Exact prediction of biodiesel CN from its FAMEs profile using the ANN-based PLS method. • Developing easy-to-use software based on the ANN-PLS model for computing the biodiesel CN. - Abstract: Cetane number (CN) is among the most important properties of biodiesel because it quantifies combustion speed or, in better words, ignition quality. Experimental measurement of biodiesel CN is rather laborious and expensive. However, the high proportionality of the biodiesel fatty acid methyl esters (FAMEs) profile with its CN is very appealing for developing straightforward and inexpensive computerized tools for biodiesel CN estimation. Unfortunately, correlating the chemical structure of biodiesel to its CN using conventional statistical and mathematical approaches is very difficult. To solve this issue, partial least squares (PLS) adapted by an artificial neural network (ANN) is introduced and examined herein as an innovative approach for the exact estimation of biodiesel CN from its FAMEs profile. In the proposed approach, the ANN paradigm was used for modeling the inner relation between the input and the output PLS score vectors. In addition, the capability of the developed method in predicting the biodiesel CN was compared with the basic PLS method. The accuracy of the developed approaches for computing the biodiesel CN was assessed using three statistical criteria, i.e., coefficient of determination (R²), mean-squared error (MSE), and percentage error (PE). The ANN-adapted PLS method predicted the biodiesel CN with an R² value higher than 0.99, demonstrating the fidelity of the developed model over the classical PLS method with a markedly lower R² value of about 0.85. In order to facilitate the use of the proposed model, an easy-to-use computer program was also developed on the basis of the ANN-adapted PLS model.

  7. Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction

    Science.gov (United States)

    Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.

    2017-09-01

    Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building interiors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which provides an initial indication of the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of vertical distances. As the point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example, in the case of a room containing several doors in which the acquisition is performed discontinuously. The labelling problem is formulated as a combinatorial approach solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested on a real case study.
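    The door-detection step can be illustrated on synthetic numbers. The sketch below is an assumption-laden toy, not the authors' implementation: it builds a vertical-clearance profile along a trajectory (high ceilings inside rooms, dips under door lintels) and flags doors as local minima that fall well below the typical clearance:

```python
import numpy as np

# Synthetic vertical-clearance profile along a scanner trajectory: ~2.8 m of
# ceiling above the scanner inside rooms, dipping where it passes under a
# door lintel. Door times (15 s and 40 s) are made up for illustration.
t = np.linspace(0.0, 60.0, 601)                  # time stamps (s)
clearance = np.full_like(t, 2.8)
for door_t in (15.0, 40.0):
    clearance -= 0.8 * np.exp(-((t - door_t) ** 2) / 0.5)

# Doors = strict local minima that also dip well below the typical clearance.
interior = (clearance[1:-1] < clearance[:-2]) & (clearance[1:-1] < clearance[2:])
dips = clearance[1:-1] < np.median(clearance) - 0.3
door_times = t[1:-1][interior & dips]

print(door_times)  # times at which the trajectory crosses a doorway
```

    The detected time stamps would then be used, as the abstract describes, to cut the time-indexed point cloud into candidate room subspaces.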

  8. Breast reconstruction with anatomical implants: A review of indications and techniques based on current literature.

    Science.gov (United States)

    Gardani, Marco; Bertozzi, Nicolò; Grieco, Michele Pio; Pesce, Marianna; Simonacci, Francesco; Santi, PierLuigi; Raposio, Edoardo

    2017-09-01

    One important modality of breast cancer therapy is surgical treatment, which has become increasingly less mutilating over the last century. Breast reconstruction has become an integral part of breast cancer treatment due to long-term psychosexual health factors and its importance for breast cancer survivors. Both autogenous tissue-based and implant-based reconstruction provide satisfactory reconstructive options, aided by better surgeon awareness of "the ideal breast size", although each has its own advantages and disadvantages. An overview of the current options in breast reconstruction is presented in this article.

  9. Development of an ANN optimized mucoadhesive buccal tablet containing flurbiprofen and lidocaine for dental pain.

    Science.gov (United States)

    Hussain, Amjad; Syed, Muhammad Ali; Abbas, Nasir; Hanif, Sana; Arshad, Muhammad Sohail; Bukhari, Nadeem Irfan; Hussain, Khalid; Akhlaq, Muhammad; Ahmad, Zeeshan

    2016-06-01

    A novel mucoadhesive buccal tablet containing flurbiprofen (FLB) and lidocaine HCl (LID) was prepared to relieve dental pain. Tablet formulations (F1-F9) were prepared using variable quantities of mucoadhesive agents, hydroxypropyl methyl cellulose (HPMC) and sodium alginate (SA). The formulations were evaluated for their physicochemical properties, mucoadhesive strength and mucoadhesion time, swellability index and in vitro release of active agents. Release of both drugs depended on the relative ratio of HPMC:SA. However, mucoadhesive strength and mucoadhesion time were better in formulations, containing higher proportions of HPMC compared to SA. An artificial neural network (ANN) approach was applied to optimise formulations based on known effective parameters (i.e., mucoadhesive strength, mucoadhesion time and drug release), which proved valuable. This study indicates that an effective buccal tablet formulation of flurbiprofen and lidocaine can be prepared via an optimized ANN approach.

  10. GPU based Monte Carlo for PET image reconstruction: parameter optimization

    International Nuclear Information System (INIS)

    Cserkaszky, Á; Légrády, D.; Wirth, A.; Bükki, T.; Patay, G.

    2011-01-01

    This paper presents the optimization of a fully Monte Carlo (MC) based iterative image reconstruction of Positron Emission Tomography (PET) measurements. With our MC reconstruction method, all the physical effects in a PET system are taken into account, thus superior image quality is achieved in exchange for increased computational effort. The method is feasible because we utilize the enormous processing power of Graphical Processing Units (GPUs) to solve the inherently parallel problem of photon transport. The MC approach regards the simulated positron decays as samples in the mathematical sums required by the iterative reconstruction algorithm; thus, to complement the fast architecture, our optimization work focuses on the number of simulated positron decays required to obtain sufficient image quality. We have achieved significant results in determining the optimal number of samples for arbitrary measurement data; this allows the best image quality to be achieved with the least possible computational effort. Based on this research, recommendations can be given for effective partitioning of computational effort into the iterations in time-limited reconstructions. (author)

  11. Optimization of thermal conductivity lightweight brick type AAC (Autoclaved Aerated Concrete) effect of Si & Ca composition by using Artificial Neural Network (ANN)

    Science.gov (United States)

    Zulkifli; Wiryawan, G. P.

    2018-03-01

    Lightweight brick is an important component of building construction, so its thermal, mechanical and acoustic properties must meet the relevant standards; this paper addresses the domain of lightweight brick thermal conductivity. Lightweight brick has the advantage of low density (500-650 kg/m³), is more economical, and can reduce structural load by 30-40% compared to conventional clay brick. In this research, an Artificial Neural Network (ANN) is used to predict the thermal conductivity of lightweight brick of the Autoclaved Aerated Concrete (AAC) type. Based on the training and evaluation carried out on 10 ANN models with 1 to 10 hidden nodes, the ANN with 3 hidden nodes had the best performance, with a mean validation MSE (Mean Square Error) over three training runs of 0.003269. This ANN was further used to predict the thermal conductivity of four lightweight brick samples. The predicted results for the AAC1, AAC2, AAC3 and AAC4 samples were 0.243 W/m.K, 0.29 W/m.K, 0.32 W/m.K, and 0.32 W/m.K, respectively. Furthermore, the ANN was used to determine the effect of the silicon (Si) and calcium (Ca) compositions on lightweight brick thermal conductivity. The ANN simulation results show that the thermal conductivity increases with increasing Si composition. The maximum allowable Si content is 26.57%, while the Ca content lies in the range 20.32-30.35%.

  12. Reconstruction of a digital core containing clay minerals based on a clustering algorithm.

    Science.gov (United States)

    He, Yanlong; Pu, Chunsheng; Jing, Cheng; Gu, Xiaoyu; Chen, Qingdong; Liu, Hongzhi; Khan, Nasir; Dong, Qiaoling

    2017-10-01

    It is difficult to obtain core samples and related information for digital core reconstruction of mature sandstone reservoirs around the world, especially for unconsolidated sandstone reservoirs. Meanwhile, reconstruction and division of clay minerals play a vital role in the reconstruction of digital cores, as two-dimensional data-based reconstruction methods are specifically applicable as microstructure simulation methods for sandstone reservoirs. However, reconstruction of clay minerals remains challenging from a research viewpoint for the better reconstruction of various clay minerals in digital cores. In the present work, the content of clay minerals was considered on the basis of two-dimensional information about the reservoir. After application of the hybrid method, and compared with the model reconstructed by the process-based method, the output was a digital core containing clay clusters without labels for the clusters' number, size, and texture. The statistics and geometry of the reconstructed model were similar to those of the reference model. In addition, the Hoshen-Kopelman algorithm was used to label the various connected, unclassified clay clusters in the initial model, and the number and size of the clay clusters were recorded. At the same time, the K-means clustering algorithm was applied to divide the labeled, large connected clusters into smaller clusters on the basis of differences in the clusters' characteristics. According to the clay minerals' characteristics, such as types, textures, and distributions, the digital core containing clay minerals was reconstructed by means of the clustering algorithm and a judgment of the clay clusters' structure. The distributions and textures of the clay minerals in the digital core were reasonable. The clustering algorithm improved the digital core reconstruction and provides an alternative method for the simulation of different clay minerals in digital cores.
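    The cluster-labeling step can be sketched with a minimal union-find pass over a 2D occupancy grid, in the spirit of the Hoshen-Kopelman algorithm the abstract mentions (the grid and 4-connectivity choice are illustrative, not the paper's data):

```python
# Minimal union-find labelling of "clay" cells in a toy 2D grid
# (4-connectivity), analogous to Hoshen-Kopelman cluster labelling.
grid = [
    [1, 1, 0, 0, 1],
    [0, 1, 0, 1, 1],
    [0, 0, 0, 0, 1],
    [1, 0, 1, 1, 0],
]

parent = {}

def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]   # path halving
        a = parent[a]
    return a

def union(a, b):
    parent[find(a)] = find(b)

for i, row in enumerate(grid):
    for j, v in enumerate(row):
        if not v:
            continue
        parent[(i, j)] = (i, j)
        if i > 0 and grid[i - 1][j]:    # merge with occupied neighbor above
            union((i, j), (i - 1, j))
        if j > 0 and grid[i][j - 1]:    # merge with occupied neighbor left
            union((i, j), (i, j - 1))

clusters = {}
for cell in parent:
    clusters.setdefault(find(cell), []).append(cell)

sizes = sorted(len(c) for c in clusters.values())
print(len(clusters), sizes)             # 4 clusters of sizes [1, 2, 3, 4]
```

    In the paper's pipeline, clusters labeled this way are then recorded by number and size, and large ones are split further with K-means.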

  14. A Comparison of SWAT and ANN Models for Daily Runoff Simulation in Different Climatic Zones of Peninsular Spain

    Directory of Open Access Journals (Sweden)

    Patricia Jimeno-Sáez

    2018-02-01

    Full Text Available Streamflow data are of prime importance to water-resources planning and management, and the accuracy of their estimation is very important for decision making. The Soil and Water Assessment Tool (SWAT) and Artificial Neural Network (ANN) models have been evaluated and compared to find a method to improve streamflow estimation. For a more complete evaluation, the accuracy and ability of these streamflow estimation models were also established separately based on their performance during different periods of flow, using regional flow duration curves (FDCs). Specifically, the FDCs were divided into five sectors: very low, low, medium, high and very high flow. This segmentation of flow allows precise analysis of the model performance for every important discharge event. In this study, the models were applied in two catchments in Peninsular Spain with contrasting climatic conditions: Atlantic and Mediterranean climates. The results indicate that SWAT and ANNs were generally good tools for daily streamflow modelling. However, SWAT was found to be more successful in better simulating lower flows, while ANNs were superior at estimating higher flows in all cases.
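    A flow duration curve and a five-sector split of it can be sketched as follows; the synthetic flows and the sector boundaries (10/40/60/90% exceedance) are illustrative assumptions, since the abstract does not state the thresholds used:

```python
import numpy as np

rng = np.random.default_rng(1)
flows = rng.lognormal(mean=1.0, sigma=1.0, size=3650)   # synthetic daily flows

# Flow duration curve: flows sorted descending vs. exceedance probability
# (Weibull plotting position rank/(n+1)).
q = np.sort(flows)[::-1]
p = np.arange(1, len(q) + 1) / (len(q) + 1)

# Five sectors (boundaries illustrative): very high (<10%), high (10-40%),
# medium (40-60%), low (60-90%), very low (>90%) exceedance.
bounds = [0.0, 0.10, 0.40, 0.60, 0.90, 1.0]
labels = ["very high", "high", "medium", "low", "very low"]
sectors = {lab: q[(p > lo) & (p <= hi)]
           for lab, lo, hi in zip(labels, bounds, bounds[1:])}

for lab in labels:
    print(lab, len(sectors[lab]), round(float(sectors[lab].mean()), 2))
```

    Evaluating a model's error separately within each sector is what lets a study like this one say that SWAT does better on the low-flow sectors while ANNs do better on the high-flow ones.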

  15. Winners of the Steni Fairy-Tale Competition Announced [Steni muinasjutuvõistluse võitjad selgunud] / Ants Roos, Ann Roos

    Index Scriptorium Estoniae

    Roos, Ants

    2008-01-01

    The jury of the 16th Steni fairy-tale competition comprised Ann Roos, Ants Roos, Leelo Tungal, Krista Kumberg, Leida Olszak, and Ülle Väljataga. Results: 1st place Siim Niinelaid, 2nd place Julius Air Kull, 3rd place Mihkel Rammu. Special jury prizes: Anna Kristin Peterson, Elis Ruus, Rain Hallikas, Mariliis Peterson, Marjaliisa Palu, Karl Kirsimäe, Margaret Pulk. Encouragement prizes: Karmel Klaus, Martti Kaljuste, Kristina Korell, Mirjam Võsaste, Mihkel Põder, Iirys Kalde, Miriam Jamul, Mari-Ann Mägi, Ketlin Saar, Liisbeth Kirss. Other special prizes went to Allan Läll, Berle Mees, Anett Kuuse, Karl Erik Kübarsepp, Grete Tamm, Siim Niinelaid, Kaisa Marie Sipelgas, Ellen Anett Põldmaa, Evelin Laul, Karl Laas, Karl Stamm, and Kerli Retter.

  16. A Stochastic Geometry Method for Pylon Reconstruction from Airborne LiDAR Data

    Directory of Open Access Journals (Sweden)

    Bo Guo

    2016-03-01

    Full Text Available Object detection and reconstruction from remotely sensed data are active research topics in the photogrammetric and remote sensing communities. Monitoring power engineering infrastructure by detecting key objects is important for power safety. In this paper, we introduce a novel method for the reconstruction of self-supporting pylons, widely used in high-voltage power-line systems, from airborne LiDAR data. Our work constructs pylons from a library of 3D parametric models, which are represented as polyhedrons based on stochastic geometry. Firstly, laser points belonging to pylons are extracted from the dataset using an automatic classification method. An energy function made up of two terms is then defined: the first term measures the adequacy of the objects with respect to the data, and the second term favors or penalizes certain configurations based on prior knowledge. Finally, estimation is undertaken by minimizing the energy using simulated annealing with a Markov Chain Monte Carlo sampler, leading to an optimal configuration of objects. The two main contributions of this paper are: (1) a framework for automatic pylon reconstruction; and (2) efficient global optimization. The pylons can be precisely reconstructed through energy optimization. Experiments on a dataset of complex structure produced convincing results and validated the proposed method.
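    The energy-minimization step can be illustrated with a bare-bones simulated annealing loop. This is a toy: the real method samples polyhedral pylon configurations with an MCMC sampler, whereas here the "configuration" is just two parameters and the energy is a simple quadratic with a known minimum at (3, -1):

```python
import math
import random

random.seed(42)

# Toy energy standing in for the paper's data term + prior term.
def energy(params):
    x, y = params
    return (x - 3.0) ** 2 + (y + 1.0) ** 2

def anneal(start, t0=5.0, cooling=0.999, steps=20000):
    current, e_cur = start, energy(start)
    best, e_best = current, e_cur
    t = t0
    for _ in range(steps):
        cand = (current[0] + random.gauss(0.0, 0.5),   # random local move
                current[1] + random.gauss(0.0, 0.5))
        e_new = energy(cand)
        # Metropolis rule: always accept improvements; accept worse moves
        # with probability exp(-dE/t) so the search can escape local minima.
        if e_new < e_cur or random.random() < math.exp((e_cur - e_new) / t):
            current, e_cur = cand, e_new
            if e_cur < e_best:
                best, e_best = current, e_cur
        t *= cooling                                   # geometric cooling
    return best, e_best

best, e_best = anneal((0.0, 0.0))
print(best, e_best)  # near (3, -1) with energy near 0
```

    In the paper's setting the proposal step would also change the model's structure (birth/death of pylon parts), which is what makes a Markov Chain Monte Carlo sampler necessary.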

  17. Quartet-net: a quartet-based method to reconstruct phylogenetic networks.

    Science.gov (United States)

    Yang, Jialiang; Grünewald, Stefan; Wan, Xiu-Feng

    2013-05-01

    Phylogenetic networks can model reticulate evolutionary events such as hybridization, recombination, and horizontal gene transfer. However, reconstructing such networks is not trivial. Popular character-based methods are computationally inefficient, whereas distance-based methods cannot guarantee reconstruction accuracy because pairwise genetic distances only reflect partial information about a reticulate phylogeny. To balance accuracy and computational efficiency, here we introduce a quartet-based method to construct a phylogenetic network from a multiple sequence alignment. Unlike distances that only reflect the relationship between a pair of taxa, quartets contain information on the relationships among four taxa; these quartets provide adequate capacity to infer a more accurate phylogenetic network. In applications to simulated and biological data sets, we demonstrate that this novel method is robust and effective in reconstructing reticulate evolutionary events and it has the potential to infer more accurate phylogenetic distances than other conventional phylogenetic network construction methods such as Neighbor-Joining, Neighbor-Net, and Split Decomposition. This method can be used in constructing phylogenetic networks from simple evolutionary events involving a few reticulate events to complex evolutionary histories involving a large number of reticulate events. A software called "Quartet-Net" is implemented and available at http://sysbio.cvm.msstate.edu/QuartetNet/.

  18. Fast neural-net based fake track rejection in the LHCb reconstruction

    CERN Document Server

    De Cian, Michel; Seyfert, Paul; Stahl, Sascha

    2017-01-01

    A neural-network based algorithm to identify fake tracks in the LHCb pattern recognition is presented. This algorithm, called the ghost probability, retains more than 99% of well-reconstructed tracks while reducing the number of fake tracks by 60%. It is fast enough to fit into the CPU time budget of the software trigger farm and thus reduces the combinatorics of the decay reconstructions, as well as the number of tracks that need to be processed by the particle identification algorithms. As a result, it contributes strongly to achieving the same reconstruction online and offline in the LHCb experiment in Run II of the LHC.

  19. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    International Nuclear Information System (INIS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-01-01

    We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at a relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N³). We show that this cost can be reduced to O(N²) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from theoretical arguments. We also discuss the relation between the belief propagation-based reconstruction algorithm introduced in preceding works and our approach.

  20. A fast image reconstruction technique based on ART

    International Nuclear Information System (INIS)

    Zhang Shunli; Zhang Dinghua; Wang Kai; Huang Kuidong; Li Weibin

    2007-01-01

    Algebraic Reconstruction Technique (ART) is an iterative method for image reconstruction, and improving its reconstruction speed has been one of the important research topics for ART. For the simplified weight-coefficient reconstruction model of ART, a fast grid traverse algorithm is proposed, which can determine the grid index by simple operations such as addition, subtraction, and comparison. Since the weight coefficients are calculated in real time during iteration, a large amount of storage is saved and the reconstruction speed is greatly increased. Experimental results show that the new algorithm is very effective, improving the reconstruction speed by about a factor of 10 compared with the traditional algorithm. (authors)
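
    The core ART update can be sketched as a Kaczmarz-style sweep that projects the current image estimate onto each ray equation in turn (a generic ART sketch; the paper's contribution, the fast grid traverse that computes the weight coefficients on the fly, is not reproduced here):

```python
def art(A, b, iters=100, relax=1.0):
    # Kaczmarz-style ART: sweep the rows of the system A x = b and
    # project x onto each hyperplane a_i . x = b_i in turn.
    n = len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            dot = sum(a * xi for a, xi in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            c = relax * (b_i - dot) / norm2
            x = [xi + c * a for xi, a in zip(x, a_i)]
    return x
```

    For a consistent system the sweeps converge to the exact solution; in tomography each row corresponds to one ray, and the weights a_i are exactly what the grid traverse algorithm computes on the fly.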

  1. Comparing and improving reconstruction methods for proxies based on compositional data

    Science.gov (United States)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500-year-long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their means and their uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.

  2. Artificial Neural Networks for SCADA Data based Load Reconstruction (poster)

    NARCIS (Netherlands)

    Hofemann, C.; Van Bussel, G.J.W.; Veldkamp, H.

    2011-01-01

    If at least one reference wind turbine is available, which provides sufficient information about the wind turbine loads, the loads acting on the neighbouring wind turbines can be predicted via an artificial neural network (ANN). This research explores the possibilities to apply such a network not

  3. Application of Artificial Neural Network and Support Vector Machines in Predicting Metabolizable Energy in Compound Feeds for Pigs.

    Science.gov (United States)

    Ahmadi, Hamed; Rodehutscord, Markus

    2017-01-01

    In the nutrition literature, there are several reports on the use of artificial neural network (ANN) and multiple linear regression (MLR) approaches for predicting feed composition and nutritive value, while the use of the support vector machine (SVM) method as a new alternative to MLR and ANN models is still not fully investigated. MLR, ANN, and SVM models were developed to predict the metabolizable energy (ME) content of compound feeds for pigs based on the German energy evaluation system from analyzed contents of crude protein (CP), ether extract (EE), crude fiber (CF), and starch. A total of 290 datasets from standardized digestibility studies with compound feeds were provided by several institutions and published papers, and ME was calculated thereon. The accuracy and precision of the developed models were evaluated based on their prediction values. The results revealed that the developed ANN [R^2 = 0.95; root mean square error (RMSE) = 0.19 MJ/kg of dry matter] and SVM (R^2 = 0.95; RMSE = 0.21 MJ/kg of dry matter) models produced better prediction values in estimating ME in compound feed than those produced by conventional MLR (R^2 = 0.89; RMSE = 0.27 MJ/kg of dry matter); however, there were no obvious differences between the performance of the ANN and SVM models. Thus, the SVM model may also be considered a promising tool for modeling the relationship between chemical composition and ME of compound feeds for pigs. To provide readers and nutritionists with an easy and rapid tool, an Excel® calculator, namely SVM_ME_pig, was created to predict the metabolizable energy values in compound feeds for pigs using the developed support vector machine model.
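
    The MLR baseline in such studies amounts to fitting a linear map from analyzed nutrient contents to ME. A self-contained sketch on synthetic data (hypothetical feature values and coefficients, not the paper's 290-sample data set or the German energy evaluation equations):

```python
def fit_mlr(X, y, lr=0.5, iters=5000):
    # Plain gradient descent on the mean squared error of a linear
    # model y_hat = b + w . x; returns (weights, bias).
    n, p = len(X), len(X[0])
    w, b = [0.0] * p, 0.0
    for _ in range(iters):
        gw, gb = [0.0] * p, 0.0
        for xi, yi in zip(X, y):
            err = b + sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(p):
                gw[j] += err * xi[j]
            gb += err
        w = [wj - lr * gwj / n for wj, gwj in zip(w, gw)]
        b -= lr * gb / n
    return w, b
```

    On data generated from y = 10 + 2*x1 - 3*x2 the fit recovers those coefficients; ANN and SVM models replace the linear map with a more flexible one when the relationship is nonlinear.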

  4. Luxury space for everyone : interview with Anne Lacaton / interviewed by Katrin Paadam

    Index Scriptorium Estoniae

    Lacaton, Anne, 1955-

    2013-01-01

    French architect Anne Lacaton on the 1960s apartment building Tour du Bois-le-Prêtre in Paris, rebuilt according to the urbanistically innovative project of her office (Lacaton & Vassal, Frédéric Druot, 2011); on social housing construction in France; on problems related to the reuse of residential buildings built in the 1960s-1970s; on the benefits of reconstruction compared with demolishing and rebuilding; on creating better living conditions for city dwellers; and on urban planning

  5. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Full Text Available Photoacoustic imaging is an innovative technique for imaging biomedical tissues. The time reversal reconstruction algorithm, in which a numerical model of the acoustic forward problem is run backwards in time, is widely used. In this paper, a time reversal reconstruction algorithm based on a particle swarm optimization (PSO) optimized support vector machine (SVM) interpolation method is proposed for photoacoustic imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of time reversal algorithms based on nearest neighbor, linear, or cubic convolution interpolation, providing higher imaging quality from significantly fewer measurement positions or scanning times.

  6. Energy Optimization Using a Case-Based Reasoning Strategy.

    Science.gov (United States)

    González-Briones, Alfonso; Prieto, Javier; De La Prieta, Fernando; Herrera-Viedma, Enrique; Corchado, Juan M

    2018-03-15

    At present, the domotization of homes and public buildings is becoming increasingly popular. Domotization is most commonly applied to the field of energy management, since it gives the possibility of managing the consumption of the devices connected to the electric network, the way in which the users interact with these devices, as well as other external factors that influence consumption. In buildings, Heating, Ventilation and Air Conditioning (HVAC) systems have the highest consumption rates. The systems proposed so far have not succeeded in optimizing the energy consumption associated with a HVAC system because they do not monitor all the variables involved in electricity consumption. For this reason, this article presents an agent approach that benefits from the advantages provided by a Multi-Agent architecture (MAS) deployed in a Cloud environment with a wireless sensor network (WSN) in order to achieve energy savings. The agents of the MAS learn social behavior thanks to the collection of data and the use of an artificial neural network (ANN). The proposed system has been assessed in an office building achieving an average energy savings of 41% in the experimental group offices.

  9. PET image reconstruction with rotationally symmetric polygonal pixel grid based highly compressible system matrix

    International Nuclear Information System (INIS)

    Yu Yunhan; Xia Yan; Liu Yaqiang; Wang Shi; Ma Tianyu; Chen Jing; Hong Baoyu

    2013-01-01

    To achieve maximum compression of the system matrix in positron emission tomography (PET) image reconstruction, we proposed a polygonal image pixel division strategy in accordance with the rotationally symmetric PET geometry. A geometrical definition and indexing rule for polygonal pixels were established. Image conversion from the polygonal pixel structure to the conventional rectangular pixel structure was implemented using a conversion matrix. A set of test images were analytically defined in the polygonal pixel structure, converted to conventional rectangular pixel based images, and correctly displayed, which verified the correctness of the image definition, the conversion description, and the conversion of the polygonal pixel structure. A compressed system matrix for PET image reconstruction was generated by the tap model and tested by forward-projecting three different distributions of radioactive sources to the sinogram domain and comparing them with theoretical predictions. On a practical small animal PET scanner, a compression ratio of 12.6:1 in system matrix size was achieved with the polygonal pixel structure, compared with the conventional rectangular pixel based tap-mode one. OS-EM iterative image reconstruction algorithms with the polygonal and conventional Cartesian pixel grids were developed. A hot rod phantom was scanned and reconstructed on these two grids with reasonable time cost; the image resolution of the reconstructed images was 1.35 mm for both. We conclude that it is feasible to reconstruct and display images in a polygonal image pixel structure based on a compressed system matrix in PET image reconstruction. (authors)

  10. On Ants Oras and Anne Lange's monograph / Jüri Talvet

    Index Scriptorium Estoniae

    Talvet, Jüri, 1945-

    2005-01-01

    Review of: Oras, Ants. Luulekool. I, Apoloogia / compiled by Hando Runnel and Jaak Rähesoo. Tartu : Ilmamaa, 2003 ; Oras, Ants. Luulekool II, Meistriklass. Tartu : Ilmamaa, 2004 ; Lange, Anne. Ants Oras : [literary scholar, critic and translator (1900-1982)]. Tartu : Ilmamaa, 2004

  11. Homotopy Based Reconstruction from Acoustic Images

    DEFF Research Database (Denmark)

    Sharma, Ojaswa

    of the inherent arrangement. The problem of reconstruction from arbitrary cross sections is a generic problem and is also shown to be solved here using the mathematical tool of continuous deformations. As part of a complete processing pipeline, segmentation using level set methods is explored for acoustic images, and fast ... GPU (Graphics Processing Unit) based methods are suggested for streaming computation on large volumes of data. Validation of results for acoustic images is not straightforward due to the unavailability of ground truth. Accuracy figures for the suggested methods are provided using phantom object

  12. Denoising multicriterion iterative reconstruction in emission spectral tomography

    Science.gov (United States)

    Wan, Xiong; Yin, Aihan

    2007-03-01

    In the study of optical testing, the computed tomography technique has been widely adopted to reconstruct three-dimensional distributions of physical parameters of various kinds of fluid fields, such as flames, plasmas, etc. In most cases, projection data are contaminated by noise due to environmental disturbance, instrumental inaccuracy, and other random interruptions. To improve the reconstruction performance in noisy cases, an algorithm that combines a self-adaptive prefiltering denoising approach (SPDA) with a multicriterion iterative reconstruction (MCIR) is proposed and studied. First, the level of noise is approximately estimated with a frequency-domain statistical method. Then the cutoff frequency of a Butterworth low-pass filter is determined based on the estimated noise energy. After the SPDA processing, the MCIR algorithm is adopted for limited-view optical computed tomography reconstruction. Simulated reconstruction of two test phantoms and a flame emission spectral tomography experiment were employed to evaluate the performance of SPDA-MCIR in noisy cases. Comparison with some traditional methods and experimental results showed that the SPDA-MCIR combination gave an obvious improvement for noisy data reconstructions.
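
    The prefiltering step can be sketched as applying a Butterworth low-pass magnitude response in the frequency domain; a naive DFT keeps the example dependency-free (in practice the cutoff would come from the SPDA noise estimate, whereas here it is simply passed in as a parameter):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N for n in range(N)]

def butterworth_lowpass(x, cutoff, order=4):
    # Multiply the spectrum by the Butterworth magnitude response
    # |H(f)| = 1 / sqrt(1 + (f/fc)^(2*order)).
    N = len(x)
    X = dft(x)
    H = []
    for k in range(N):
        f = min(k, N - k)  # symmetric (signed) frequency index
        H.append(1.0 / (1.0 + (f / cutoff) ** (2 * order)) ** 0.5)
    return idft([Xk * Hk for Xk, Hk in zip(X, H)])
```

    A constant (DC) signal passes unchanged while the highest-frequency alternating component is almost entirely suppressed, which is the desired prefiltering behavior before iterative reconstruction.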

  13. Neutron Fluence and Energy Reconstruction with the LNE-IRSN/MIMAC Recoil Detector MicroTPC at 27 keV

    Energy Technology Data Exchange (ETDEWEB)

    Maire, D.; Lebreton, L.; Querre, Ph. [Institute for Radioprotection and Nuclear Safety - IRSN, site of Cadarache, 13115 Saint Paul lez Durance (France); Bosson, G.; Guillaudin, O.; Muraz, J.F.; Riffard, Q.; Santos, D. [Laboratoire de Physique Subatomique et de Cosmologie - LPSCCNRSIN2P3/ UJF/INP, 38000 Grenoble (France)

    2015-07-01

    The French Institute for Radiation Protection and Nuclear Safety (IRSN), designated by the French Metrology Institute (LNE) for neutron metrology, is developing a time projection chamber using a Micromegas anode: the microTPC. This work is carried out in collaboration with the Laboratory of Subatomic Physics and Cosmology (LPSC). The aim is to characterize the energy distribution of neutron fluence in the energy range 8 keV - 5 MeV with a primary procedure. Time projection chambers are gaseous detectors able to measure charged-particle energies and to reconstruct their tracks if a pixelated anode is used. In our case, the gas is used as an (n, p) converter in order to detect neutrons down to a few keV. Coming from elastic collisions with neutrons, recoil protons lose part of their kinetic energy by ionizing the gas. The ionization electrons are drifted toward a pixelated anode (2D projection), read at 50 MHz by a self-triggered electronic system to obtain the third track dimension. The neutron energy is reconstructed event by event from measurements of the proton scattering angle and the proton energy. The scattering angle is deduced from the 3D track. The proton energy is obtained from charge collection measurements, knowing the ionization quenching factor (i.e. the fraction of the proton kinetic energy lost by ionizing the gas). The fluence is calculated from the number of detected events and a simulation of the detector response. The microTPC is a new, reliable detector able to measure the energy distribution of the neutron fluence without an unfolding procedure or prior neutron calibration, contrary to usual gaseous counters. The microTPC is still being developed, and measurements have been carried out at the AMANDE facility with neutron energies ranging from 8 keV to 565 keV. After a presentation of the context and the microTPC working principle, measurements of the neutron energy and fluence at 27 keV and 144 keV are shown and compared to the complete detector response simulation.
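
    The event-by-event reconstruction rests on non-relativistic elastic n-p kinematics: for equal masses, the recoil proton carries E_p = E_n cos²θ_p, so the neutron energy follows directly from the measured proton energy and recoil angle:

```python
import math

def neutron_energy(E_p, theta_p):
    """Non-relativistic n-p elastic scattering: E_p = E_n * cos^2(theta_p),
    where theta_p is the proton recoil angle relative to the incident
    neutron direction, so E_n = E_p / cos^2(theta_p)."""
    return E_p / math.cos(theta_p) ** 2
```

    For example, a 27 keV neutron producing a recoil at 30 degrees deposits 27·cos²(30°) ≈ 20.25 keV on the proton, and the formula recovers the original 27 keV.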

  14. Large R jet reconstruction and calibration at 13 TeV with the ATLAS detector

    CERN Document Server

    Taenzer, Joe; The ATLAS collaboration

    2017-01-01

    Large-R jets are used by many ATLAS analyses working in boosted regimes. ATLAS large-R jets are reconstructed from locally calibrated calorimeter topoclusters with the anti-k_{t} algorithm with radius parameter R=1.0, and then groomed to remove pile-up with the trimming algorithm with f_{cut} = 0.05 and subjet radius R=0.2. Monte Carlo based energy and mass calibrations correct the reconstructed jet energy and mass to truth, followed by in-situ calibrations using a number of different techniques. Large-R jets can also be reconstructed using small-R jets as constituents instead of topoclusters, a technique called jet reclustering, or from track calo clusters (TCCs), which are constituents constructed using both tracking and calorimeter information. An overview of large-R jet reconstruction is presented here, along with selected results from the jet mass calibrations, both Monte Carlo based and in-situ, from jet reclustering, and from track calo clusters.

  15. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago (Japan); Sakamoto, Makoto; Watanabe, Takashi [Tottori University, Division of Neurosurgery, Department of Brain and Neurosciences, Faculty of Medicine, Yonago (Japan); Iwata, Naoki; Kishimoto, Junichi [Tottori University, Division of Clinical Radiology Faculty of Medicine, Yonago (Japan); Kaminou, Toshio [Osaka Minami Medical Center, Department of Radiology, Osaka (Japan)

    2014-11-15

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  17. Estimation of Optimum Dilution in the GMAW Process Using Integrated ANN-GA

    Directory of Open Access Journals (Sweden)

    P. Sreeraj

    2013-01-01

    Full Text Available To improve the corrosion-resistance properties of carbon steel, a cladding process is usually used: depositing a thick layer of corrosion-resistant material over a carbon steel plate. Most engineering applications require high-strength and corrosion-resistant materials for long-term reliability and performance; by cladding, these properties can be achieved at minimum cost. The main problem faced in cladding is the selection of an optimum combination of process parameters for achieving a quality clad and hence good clad bead geometry. This paper highlights an experimental study to optimize various input process parameters (welding current, welding speed, gun angle, contact tip to work distance, and pinch) to get optimum dilution in stainless steel cladding of low carbon structural steel plates using gas metal arc welding (GMAW). Experiments were conducted based on a central composite rotatable design with full replication, and mathematical models were developed using the multiple regression method. The developed models have been checked for adequacy and significance. In this study, artificial neural network (ANN) and genetic algorithm (GA) techniques were integrated, labeled as integrated ANN-GA, to estimate optimal process parameters in GMAW to get optimum dilution.
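
    The GA half of such an ANN-GA integration can be sketched as follows: in the paper, the fitness of a candidate parameter set would be the dilution predicted by the trained ANN, whereas here a one-variable quadratic stands in for that surrogate (population size, operators, and bounds are illustrative choices, not the paper's settings):

```python
import random

def ga_minimize(fitness, lo, hi, pop_size=30, gens=80, mut=0.3, seed=1):
    # Elitist genetic algorithm over a single real-valued parameter:
    # keep the best half, breed children by arithmetic crossover plus
    # Gaussian mutation, and clamp to the feasible range [lo, hi].
    random.seed(seed)
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            child = (a + b) / 2                # arithmetic crossover
            child += random.gauss(0.0, mut)    # Gaussian mutation
            children.append(min(max(child, lo), hi))
        pop = elite + children
    return min(pop, key=fitness)
```

    In the integrated scheme, the scalar would become a vector of welding parameters and the fitness would call the trained ANN to score each candidate.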

  18. Image Reconstruction Based on Homotopy Perturbation Inversion Method for Electrical Impedance Tomography

    Directory of Open Access Journals (Sweden)

    Jing Wang

    2013-01-01

    Full Text Available The image reconstruction problem for electrical impedance tomography (EIT) is mathematically a typical nonlinear ill-posed inverse problem. In this paper, a novel iterative regularization scheme based on the homotopy perturbation technique, namely the homotopy perturbation inversion method, is applied to investigate the EIT image reconstruction problem. To verify its feasibility and effectiveness, simulations of image reconstruction have been performed considering different locations, sizes, and numbers of inclusions, as well as robustness to data noise. Numerical results indicate that this method can overcome numerical instability and is robust to data noise in EIT image reconstruction. Moreover, compared with the classical Landweber iteration method, our approach improves the convergence rate. The results are promising.

  19. Skull defect reconstruction based on a new hybrid level set.

    Science.gov (United States)

    Zhang, Ziqun; Zhang, Ran; Song, Zhijian

    2014-01-01

    Skull defect reconstruction is an important aspect of surgical repair. Historically, a skull defect prosthesis was created by the mirroring technique, surface fitting, or formed templates. These methods are not based on the anatomy of the individual patient's skull, and therefore, the prosthesis cannot precisely correct the defect. This study presented a new hybrid level set model, taking into account both the global optimization region information and the local accuracy edge information, while avoiding re-initialization during the evolution of the level set function. Based on the new method, a skull defect was reconstructed, and the skull prosthesis was produced by rapid prototyping technology. This resulted in a skull defect prosthesis that well matched the skull defect with excellent individual adaptation.

  20. Development of an ANN optimized mucoadhesive buccal tablet containing flurbiprofen and lidocaine for dental pain

    Directory of Open Access Journals (Sweden)

    Hussain Amjad

    2016-06-01

    Full Text Available A novel mucoadhesive buccal tablet containing flurbiprofen (FLB) and lidocaine HCl (LID) was prepared to relieve dental pain. Tablet formulations (F1-F9) were prepared using variable quantities of the mucoadhesive agents hydroxypropyl methyl cellulose (HPMC) and sodium alginate (SA). The formulations were evaluated for their physicochemical properties, mucoadhesive strength and mucoadhesion time, swellability index, and in vitro release of the active agents. Release of both drugs depended on the relative ratio of HPMC:SA, whereas mucoadhesive strength and mucoadhesion time were better in formulations containing higher proportions of HPMC than SA. An artificial neural network (ANN) approach was applied to optimize formulations based on the known effective parameters (i.e., mucoadhesive strength, mucoadhesion time, and drug release), which proved valuable. This study indicates that an effective buccal tablet formulation of flurbiprofen and lidocaine can be prepared via an optimized ANN approach.

  1. Chinese healing / commented on by Anne, Julia, Weihong Song, Fagang Ren

    Index Scriptorium Estoniae

    2013-01-01

    On the Bai Lan Chinese massage salon at Tulika 19 in Tallinn, where treatments include cupping therapy, gua sha scraping plates, moxibustion, acupuncture, and acupressure. Tui na massage and Chinese natural therapy procedures are commented on by specialists and by the patients Anne and Julia

  2. On admiring Russian and French art in London / Ann Alari

    Index Scriptorium Estoniae

    Alari, Ann

    2008-01-01

    On the exhibition "French and Russian masterpieces from Russia" in the halls of the Royal Academy of Arts in London, covering the period from 1870 to 1925. The works came from the collections of Sergei Shchukin and Ivan Morozov, textile magnates who lived in Moscow, which were nationalized in 1917. Curator Ann Dumas

  3. Modeling and forecasting energy flow between national power grid and a solar–wind–pumped-hydroelectricity (PV–WT–PSH) energy source

    International Nuclear Information System (INIS)

    Jurasz, Jakub

    2017-01-01

    Highlights: • A MINLP model for a grid-connected PV-WT-PSH system is proposed. • A method for simulating and forecasting energy flow has been developed. • A probabilistic model is compared to an artificial neural network approach. - Abstract: The structure of modern energy systems has evolved based on the assumption that it is the demand side which is variable, whilst the supply side must adjust to forecasted (or unforecasted) changes. But the increasing role of variable renewable energy sources (VRES) has led to a situation in which the supply side is also becoming more and more unpredictable. To date, various approaches have been proposed to overcome this impediment. This paper aims to combine mixed integer modeling with an artificial neural network (ANN) forecasting method in order to predict the volume of energy flow between a local balancing area using PV–WT–PSH and the national power system (NPS). Calculations have been performed based on hourly time series of wind speed, irradiation, and energy demand. The results indicate that both the probabilistic and ANN models generate comparably accurate forecasts; however, the opportunity for improvement in the former appears to be significantly greater. The mean prediction error (for one-hour-ahead forecasts) of the best model was 0.15 MW h, which amounts to less than 0.2% of the mean hourly energy demand of the considered energy consumer. The proposed approach has great potential to reduce the impact of VRES on NPS operation and can be used to facilitate their integration and increase their share in covering energy demand.

  4. INDOOR MODELLING FROM SLAM-BASED LASER SCANNER: DOOR DETECTION TO ENVELOPE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    L. Díaz-Vilariño

    2017-09-01

    Full Text Available Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building interiors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which provides an initial indication of the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of the vertical distances. As the point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example, in the case of a room containing several doors in which the acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial approach solved as a minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings, and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested on a real case study.
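
    The door-detection step can be sketched as a scan for pronounced local minima in the vertical clearance profile along the trajectory (a minimal sketch: the margin threshold and the clearance values are illustrative, and the paper's construction of the profile from the point cloud is not reproduced):

```python
def detect_doors(heights, margin=0.2):
    # heights[i]: vertical clearance above the trajectory at step i.
    # A door shows up as a local minimum clearly below its neighbours
    # (the ceiling drops to the door lintel as the scanner passes through).
    doors = []
    for i in range(1, len(heights) - 1):
        if (heights[i] < heights[i - 1] and heights[i] < heights[i + 1]
                and min(heights[i - 1], heights[i + 1]) - heights[i] >= margin):
            doors.append(i)
    return doors
```

    The returned indices, combined with the time stamps shared by trajectory and point cloud, are what allow the cloud to be cut into per-room subspaces.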

  5. Breast reconstruction after mastectomy

    Directory of Open Access Journals (Sweden)

    Daniel Schmauss

    2016-01-01

    Full Text Available Breast cancer is the leading cause of cancer death in women worldwide. Its surgical approach has become less and less mutilating in the last decades. However, the overall number of breast reconstructions has significantly increased lately. Nowadays breast reconstruction should be highly individualized, first of all taking into consideration oncological aspects of the tumor, neo-/adjuvant treatment and genetic predisposition, but also its timing (immediate versus delayed breast reconstruction), as well as the patient's condition and wishes. This article gives an overview of the various possibilities of breast reconstruction, including implant- and expander-based reconstruction, flap-based reconstruction (vascularized autologous tissue), the combination of implant and flap, reconstruction using non-vascularized autologous fat, as well as refinement surgery after breast reconstruction.

  6. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    Signal reconstruction methods based on inverse modeling for multifunctional sensors have been widely studied in recent years. To improve accuracy, the reconstruction methods have become more and more complicated because of the increase in model parameters and sample points. However, there is another factor that affects reconstruction accuracy which has not been studied: the position of the sample points. A reasonable selection of the sample points could improve signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving accuracy and decreasing workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel subclustering that distills groupings of the sample data and produces a representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function generalizes the distance metric by mapping data that are non-separable in the original space into homogeneous groups in a high-dimensional space. The method obtained the best results compared with the other three methods in the simulation. (paper)
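
    The kernel-induced distance the abstract refers to can be sketched directly: for a kernel k, the squared feature-space distance is k(x,x) - 2k(x,y) + k(y,y). The RBF kernel width and the greedy representative selection below are illustrative assumptions, not the paper's exact subclustering procedure.

```python
import numpy as np

# Kernel-induced distance plus a simple greedy selection of sample
# representatives, as a stand-in for kernel subclustering.
def rbf(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def kernel_distance(x, y, k=rbf):
    # sqrt of k(x,x) - 2 k(x,y) + k(y,y); clipped against rounding error.
    return np.sqrt(max(k(x, x) - 2 * k(x, y) + k(y, y), 0.0))

def select_representatives(points, m, k=rbf):
    """Greedy farthest-point selection under the kernel-induced distance."""
    chosen = [0]
    while len(chosen) < m:
        dists = [min(kernel_distance(p, points[c], k) for c in chosen)
                 for p in points]
        chosen.append(int(np.argmax(dists)))
    return chosen

rng = np.random.default_rng(1)
pts = rng.normal(size=(40, 2))
reps = select_representatives(pts, 5)
print("representative indices:", reps)
```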

  7. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    International Nuclear Information System (INIS)

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches therefore represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, has been introduced. The method is showcased for the case of cylindrical symmetries by using polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows exploiting the advantages of model-based algorithms with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability in optoacoustic imaging systems with tomographic symmetries is anticipated.

  8. The calibration and electron energy reconstruction of the BGO ECAL of the DAMPE detector

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Zhiyong; Wang, Chi; Dong, Jianing; Wei, Yifeng [State Key Laboratory of Particle Detection and Electronics (IHEP-USTC), University of Science and Technology of China, Hefei 230026 (China); Wen, Sicheng [Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210000 (China); Zhang, Yunlong, E-mail: ylzhang@ustc.edu.cn [State Key Laboratory of Particle Detection and Electronics (IHEP-USTC), University of Science and Technology of China, Hefei 230026 (China); Li, Zhiying; Feng, Changqing; Gao, Shanshan; Shen, ZhongTao; Zhang, Deliang; Zhang, Junbin; Wang, Qi; Ma, SiYuan; Yang, Di; Jiang, Di [State Key Laboratory of Particle Detection and Electronics (IHEP-USTC), University of Science and Technology of China, Hefei 230026 (China); Chen, Dengyi; Hu, Yiming [Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210000 (China); Huang, Guangshun; Wang, Xiaolian [State Key Laboratory of Particle Detection and Electronics (IHEP-USTC), University of Science and Technology of China, Hefei 230026 (China); and others

    2016-11-11

    The DArk Matter Particle Explorer (DAMPE) is a space experiment designed to search for dark matter indirectly by measuring the spectra of photons, electrons, and positrons up to 10 TeV. The BGO electromagnetic calorimeter (ECAL) is its main sub-detector for energy measurement. In this paper, the instrumentation and development of the BGO ECAL are briefly described. The on-ground calibration, including the pedestal, minimum ionizing particle (MIP) peak, dynode ratio, and attenuation length, performed with cosmic rays and beam particles, is discussed in detail. The energy reconstruction results for electrons from the beam test are also presented.
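
    The role of the calibration constants in energy reconstruction can be sketched in miniature: subtract each channel's pedestal, convert ADC counts to energy via the MIP calibration, and sum over crystals. All numbers below (pedestals, MIP peaks, the assumed MIP deposit, readings) are made up for illustration; the real DAMPE calibration additionally handles dynode ratios and light attenuation.

```python
import numpy as np

# Toy calorimeter energy reconstruction from calibration constants.
MIP_ENERGY_MEV = 23.0        # assumed energy deposited by a MIP in one BGO bar

pedestal = np.array([100.0, 102.5, 98.0, 101.0])   # ADC counts
mip_peak = np.array([60.0, 58.0, 62.0, 59.0])      # ADC counts above pedestal per MIP

adc = np.array([700.0, 400.0, 160.0, 101.0])       # raw readings for one event

signal = np.clip(adc - pedestal, 0.0, None)        # pedestal subtraction
energy_per_bar = signal / mip_peak * MIP_ENERGY_MEV  # ADC -> MeV via MIP scale
total_energy = energy_per_bar.sum()
print(f"reconstructed energy: {total_energy:.1f} MeV")
```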

  9. ANN-PSO Integrated Optimization Methodology for Intelligent Control of MMC Machining

    Science.gov (United States)

    Chandrasekaran, Muthumari; Tamang, Santosh

    2017-08-01

    Metal Matrix Composites (MMC) show improved properties in comparison with non-reinforced alloys and have found increased application in the automotive and aerospace industries. The selection of optimum machining parameters to produce components of desired surface roughness is of great concern considering the quality and economy of the manufacturing process. In this study, a surface roughness prediction model for turning Al-SiCp MMC is developed using an Artificial Neural Network (ANN). Three turning parameters, viz. spindle speed (N), feed rate (f) and depth of cut (d), were considered as input neurons, and surface roughness was the output neuron. An ANN architecture of 3-5-1 is found to be optimum, and the model predicts with an average percentage error of 7.72%. The Particle Swarm Optimization (PSO) technique is used for optimizing the parameters to minimize machining time. The innovative aspect of this work is the development of an integrated ANN-PSO optimization method for intelligent control of the MMC machining process applicable to manufacturing industries. The robustness of the method shows its superiority for obtaining optimum cutting parameters satisfying the desired surface roughness. The method has better convergence capability with a minimum number of iterations.
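
    The ANN-PSO coupling can be sketched as follows: PSO searches the cutting parameters that minimize machining time, while a roughness predictor (a simple surrogate formula standing in for the trained ANN) supplies the constraint. The surrogate, bounds, penalty and PSO settings are illustrative assumptions.

```python
import numpy as np

# Global-best PSO over (N, f, d) with a roughness constraint handled
# by a penalty term.
rng = np.random.default_rng(2)

LO = np.array([500.0, 0.05, 0.25])   # spindle speed N (rpm), feed f (mm/rev), depth d (mm)
HI = np.array([2000.0, 0.30, 1.50])
RA_MAX = 2.0                          # required surface roughness (um)

def roughness(p):                     # stand-in for the ANN prediction
    N, f, d = p.T
    return 40.0 * f ** 1.2 * d ** 0.3 / (N / 1000.0) ** 0.5

def machining_time(p):                # time ~ 1 / (N * f), penalize infeasibility
    N, f, d = p.T
    t = 1e4 / (N * f)
    return t + 1e6 * np.maximum(roughness(p) - RA_MAX, 0.0)

n = 30
x = rng.uniform(LO, HI, size=(n, 3))
v = np.zeros_like(x)
pbest, pcost = x.copy(), machining_time(x)
for _ in range(200):
    g = pbest[np.argmin(pcost)]       # global best
    v = 0.7 * v + 1.5 * rng.random((n, 1)) * (pbest - x) \
        + 1.5 * rng.random((n, 1)) * (g - x)
    x = np.clip(x + v, LO, HI)
    cost = machining_time(x)
    better = cost < pcost
    pbest[better], pcost[better] = x[better], cost[better]

best = pbest[np.argmin(pcost)]
print(f"N={best[0]:.0f} rpm, f={best[1]:.3f} mm/rev, d={best[2]:.2f} mm, "
      f"Ra={roughness(best[None])[0]:.2f} um")
```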

  10. Parallelized Kalman-Filter-Based Reconstruction of Particle Tracks on Many-Core Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Cerati, Giuseppe [Fermilab; Elmer, Peter [Princeton U.; Krutelyov, Slava [UC, San Diego; Lantz, Steven [Cornell U., Phys. Dept.; Lefebvre, Matthieu [Princeton U.; Masciovecchio, Mario [UC, San Diego; McDermott, Kevin [Cornell U., Phys. Dept.; Riley, Daniel [Cornell U., Phys. Dept.; Tadel, Matevž [UC, San Diego; Wittich, Peter [Cornell U., Phys. Dept.; Würthwein, Frank [UC, San Diego; Yagil, Avi [UC, San Diego

    2017-11-16

    Faced with physical and energy density limitations on clock speed, contemporary microprocessor designers have increasingly turned to on-chip parallelism for performance gains. Examples include the Intel Xeon Phi, GPGPUs, and similar technologies. Algorithms should accordingly be designed with ample amounts of fine-grained parallelism if they are to realize the full performance of the hardware. This requirement can be challenging for algorithms that are naturally expressed as a sequence of small-matrix operations, such as the Kalman filter methods widely in use in high-energy physics experiments. In the High-Luminosity Large Hadron Collider (HL-LHC), for example, one of the dominant computational problems is expected to be finding and fitting charged-particle tracks during event reconstruction; today, the most common track-finding methods are those based on the Kalman filter. Experience at the LHC, both in the trigger and offline, has shown that these methods are robust and provide high physics performance. Previously we reported the significant parallel speedups that resulted from our efforts to adapt Kalman-filter-based tracking to many-core architectures such as Intel Xeon Phi. Here we report on how effectively those techniques can be applied to more realistic detector configurations and event complexity.
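
    The fine-grained parallelism described above comes from the fact that a Kalman update per track candidate is a small-matrix operation, so many tracks can be updated at once in vectorized form (here with numpy einsum; on Xeon Phi or GPUs this maps to SIMD lanes or threads). The toy 2-parameter track state and measurement model are illustrative assumptions, not the experiments' real track model.

```python
import numpy as np

# Batched Kalman measurement update for many tracks at once.
rng = np.random.default_rng(3)
n_tracks = 1000

x = rng.normal(size=(n_tracks, 2))                    # state: (position, slope)
P = np.tile(np.eye(2), (n_tracks, 1, 1))              # per-track covariance
H = np.array([[1.0, 0.0]])                            # measure position only
R = 0.1                                               # measurement variance

z = x[:, 0] + rng.normal(scale=np.sqrt(R), size=n_tracks)  # simulated hits

# Batched Kalman gain K = P H^T (H P H^T + R)^-1 for all tracks.
PHt = np.einsum('nij,kj->nik', P, H)                  # (n, 2, 1)
S = np.einsum('ki,nij,lj->nkl', H, P, H) + R          # (n, 1, 1) innovation cov.
K = PHt / S                                           # (n, 2, 1)

innov = z - np.einsum('ki,ni->nk', H, x)[:, 0]        # (n,)
x = x + K[:, :, 0] * innov[:, None]                   # state update
P = P - np.einsum('nik,kj,njl->nil', K, H, P)         # covariance update

print("mean updated position variance:", P[:, 0, 0].mean())
```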

  11. Neural Network Based Maximum Power Point Tracking Control with Quadratic Boost Converter for PMSG—Wind Energy Conversion System

    Directory of Open Access Journals (Sweden)

    Ramji Tiwari

    2018-02-01

    Full Text Available This paper proposes an artificial neural network (ANN) based maximum power point tracking (MPPT) control strategy for a wind energy conversion system (WECS) implemented with a DC/DC converter. The proposed topology utilizes a radial basis function network (RBFN) based neural network control strategy to extract the maximum available power from the wind velocity. The results are compared with the classical perturb and observe (P&O) method and a back-propagation network (BPN) method. In order to achieve a high voltage rating, the system is implemented with a quadratic boost converter, and the performance of the converter is validated against a boost converter and a single-ended primary-inductance converter (SEPIC). The performance of the MPPT technique along with the DC/DC converter is demonstrated using MATLAB/Simulink.
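
    The classical P&O baseline the RBFN controller is compared against is simple to sketch: perturb the operating voltage, keep the perturbation direction if power increased, and reverse it otherwise. The power-voltage curve and step size below are illustrative assumptions.

```python
# Perturb and Observe (P&O) hill-climbing toward the maximum power point.
def pv_power(v):
    """Toy unimodal power-voltage curve with its maximum near v = 30."""
    return max(v * (60.0 - v) / 15.0, 0.0)

def perturb_and_observe(v0=10.0, step=0.5, iters=200):
    v, direction = v0, +1.0
    p_prev = pv_power(v)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

v_mpp = perturb_and_observe()
print(f"operating point settled near v = {v_mpp:.1f}")
```

    The characteristic P&O drawback, oscillation around the maximum by one step size, is visible here and is part of what ANN-based trackers aim to avoid.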

  12. Greek long-term energy consumption prediction using artificial neural networks

    International Nuclear Information System (INIS)

    Ekonomou, L.

    2010-01-01

    In this paper, artificial neural networks (ANN) are applied in order to predict Greek long-term energy consumption. The multilayer perceptron (MLP) model has been used for this purpose, testing several possible architectures in order to select the one with the best generalizing ability. Actual recorded input and output data that influence long-term energy consumption were used in the training, validation and testing process. The developed ANN model is used for the prediction of Greek energy consumption in 2005-2008, 2010, 2012 and 2015. The ANN results for the years 2005-2008 were compared with the results produced by a linear regression method, a support vector machine method and with real energy consumption records, showing great accuracy. The proposed approach can be useful in the effective implementation of energy policies, since accurate predictions of energy consumption affect capital investment, environmental quality, revenue analysis and market research management, while at the same time preserving supply security. Furthermore, it constitutes an accurate tool for the Greek long-term energy consumption prediction problem, which until today has not been addressed effectively.

  13. Track reconstruction in liquid hydrogen ionization chamber

    International Nuclear Information System (INIS)

    Balbekov, V.I.; Baranov, A.M.; Krasnokutski, R.N.; Perelygin, V.P.; Rasuvaev, E.A.; Shuvalov, R.S.; Zhigunov, V.P.; Lebedenko, V.N.; Stern, B.E.

    1979-01-01

    It is shown that particle track parameters can be reconstructed from the currents in the anode cells of the ionization chamber. The calculations are carried out for a chamber with a 10 cm anode-cathode gap. For simplicity, a two-dimensional chamber model is used. To simplify the calculations, the charge density along the track is considered constant and equal to 10⁴ electrons/mm. The drift velocity of electrons is assumed to be 5×10⁶ cm/s. The anode is divided into cells 2 cm in width. An event in the chamber is defined by the coordinates X and Z of the event vertex, the polar angle THETA of each track, and the track length l. The coordinates x, y and track angle THETA are reconstructed from the currents with errors down to a millimetre and a milliradian. The reconstruction errors are proportional to the noise level of the electronics and also depend on the track geometry and argon purification. The energy resolution of the chamber is calculated for high-energy electrons by means of a computer program based on the Monte Carlo method. The conclusion is made that the energy resolution depends on the gap width as a square root. Two ways to solve the track reconstruction problem are considered: 1. the initial charge density is determined by measuring the charges induced in anode strips at some discrete moments of time; 2. the evaluation of the parameters is made by a traditional minimization technique. The second method is applicable only for a not very large number of hypotheses, but it is less time consuming.

  14. Prediction of scour below submerged pipeline crossing a river using ANN.

    Science.gov (United States)

    Azamathulla, H M; Zakaria, Nor Azazi

    2011-01-01

    The process involved in local scour below pipelines is so complex that it is difficult to establish a general empirical model providing accurate estimation of scour. This paper describes the use of artificial neural networks (ANN) to estimate the pipeline scour depth. Data sets of laboratory measurements were collected from published works and used to train the network. The developed networks were validated using observations that were not involved in training. The ANN was found to be more effective than regression equations in predicting the scour depth around pipelines.

  15. The benefit of modeled ozone data for the reconstruction of a 99-year UV radiation time series

    Science.gov (United States)

    Junk, J.; Feister, U.; Helbig, A.; Görgen, K.; Rozanov, E.; Krzyścin, J. W.; Hoffmann, L.

    2012-08-01

    Solar erythemal UV radiation (UVER) is highly relevant for numerous biological processes that affect plants, animals, and human health. Nevertheless, long-term UVER records are scarce. As significant declines in the column ozone concentration were observed in the past and a recovery of the stratospheric ozone layer is anticipated by the middle of the 21st century, there is a strong interest in the temporal variation of UVER time series. Therefore, we combined ground-based measurements of different meteorological variables with modeled ozone data sets to reconstruct time series of daily totals of UVER at the Meteorological Observatory, Potsdam, Germany. Artificial neural networks were trained with measured UVER, sunshine duration, the day of year, measured and modeled total column ozone, as well as the minimum solar zenith angle. This allows for the reconstruction of daily totals of UVER for the period from 1901 to 1999. Additionally, analyses of the long-term variations from 1901 until 1999 of the reconstructed, new UVER data set are presented. The time series of monthly and annual totals of UVER provide a long-term meteorological basis for epidemiological investigations in human health and occupational medicine for the region of Potsdam and Berlin. A strong benefit of our ANN approach is the fact that it can be easily adapted to different geographical locations, as successfully tested in the framework of COST Action 726.

  16. Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D. [UCSF Benioff Children's Hospital, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States)

    2014-07-15

    Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo trademark), a technique developed to improve image quality and reduce noise. To evaluate Veo trademark as an improved method when compared to adaptive statistical iterative reconstruction (ASIR trademark) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board-approved study. Raw data were reconstructed into separate image datasets using Veo trademark and ASIR trademark algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo trademark over ASIR trademark images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo trademark vs. ASIR trademark reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo trademark and ASIR trademark images. Veo trademark consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. 
When compared to the more established adaptive statistical iterative reconstruction algorithm, model-based

  17. ℓ0 Gradient Minimization Based Image Reconstruction for Limited-Angle Computed Tomography.

    Directory of Open Access Journals (Sweden)

    Wei Yu

    Full Text Available In medical and industrial applications of computed tomography (CT) imaging, limited by the scanning environment and the risk of excessive X-ray radiation exposure imposed on patients, reconstructing high-quality CT images from limited projection data has become a hot topic. X-ray imaging over a limited scanning angular range is an effective imaging modality to reduce the radiation dose to patients. As the projection data available in this modality are incomplete, limited-angle CT image reconstruction is actually an ill-posed inverse problem. Images reconstructed by the conventional filtered back projection (FBP) algorithm frequently exhibit conspicuous streak artifacts and gradually changing artifacts near edges. Image reconstruction based on total variation minimization (TVM) can significantly reduce streak artifacts in few-view CT, but it suffers from the gradually changing artifacts near edges in limited-angle CT. To suppress this kind of artifact, we develop an image reconstruction algorithm based on ℓ0 gradient minimization for limited-angle CT in this paper. The ℓ0-norm of the image gradient is taken as the regularization function in the developed reconstruction model. We transformed the optimization problem into a few optimization sub-problems and then solved these sub-problems by alternating iteration. Numerical experiments are performed to validate the efficiency and feasibility of the developed algorithm. Statistical analysis of the performance evaluations, peak signal-to-noise ratio (PSNR) and normalized root mean square distance (NRMSD), shows that there are significant statistical differences between different algorithms for different scanning angular ranges (p<0.0001). The experimental results also indicate that the developed algorithm outperforms classical reconstruction algorithms in suppressing the streak artifacts and the gradually changing
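
    The alternating sub-problem structure can be sketched on a 1-D signal with a half-quadratic splitting: a closed-form hard threshold handles the ℓ0 term on the gradient, and a linear solve handles the fidelity term. The parameters (lam, beta schedule) are illustrative assumptions; the paper applies this idea inside a full CT reconstruction model.

```python
import numpy as np

# l0 gradient minimization by alternating a hard threshold on the
# gradient with a linear solve for the signal, while beta increases.
def l0_gradient_minimize(f, lam=0.02, beta0=0.04, kappa=2.0, beta_max=1e4):
    n = f.size
    D = np.diff(np.eye(n), axis=0)        # forward-difference operator
    DtD = D.T @ D
    u, beta = f.copy(), beta0
    while beta < beta_max:
        g = D @ u
        h = np.where(g ** 2 >= lam / beta, g, 0.0)   # l0 sub-problem: hard threshold
        # quadratic sub-problem: (I + beta D^T D) u = f + beta D^T h
        u = np.linalg.solve(np.eye(n) + beta * DtD, f + beta * (D.T @ h))
        beta *= kappa
    return u

# Piecewise-constant signal plus noise: the result should be nearly flat
# within pieces, keeping only a few non-zero gradients at the jump.
rng = np.random.default_rng(4)
f = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
u = l0_gradient_minimize(f)
jumps = int(np.sum(np.abs(np.diff(u)) > 0.1))
print("large gradients kept:", jumps)
```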

  18. Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D.

    2014-01-01

    Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo trademark), a technique developed to improve image quality and reduce noise. To evaluate Veo trademark as an improved method when compared to adaptive statistical iterative reconstruction (ASIR trademark) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board-approved study. Raw data were reconstructed into separate image datasets using Veo trademark and ASIR trademark algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo trademark over ASIR trademark images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo trademark vs. ASIR trademark reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo trademark and ASIR trademark images. Veo trademark consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. 
When compared to the more established adaptive statistical iterative reconstruction algorithm, model-based

  19. Signal reconstruction in wireless sensor networks based on a cubature Kalman particle filter

    International Nuclear Information System (INIS)

    Huang Jin-Wang; Feng Jiu-Chao

    2014-01-01

    To address the signal reconstruction of nonlinear, non-Gaussian signals in wireless sensor networks (WSNs), a new signal reconstruction algorithm based on a cubature Kalman particle filter (CKPF) is proposed in this paper. We model the reconstruction signal first and then use the CKPF to estimate the signal. The CKPF uses a cubature Kalman filter (CKF) to generate the importance proposal distribution of the particle filter and integrates the latest observation, which approximates the true posterior distribution better and improves estimation accuracy. The CKPF uses fewer cubature points than the unscented Kalman particle filter (UKPF) and has lower computational overhead. Moreover, the CKPF iterates with the square root of the error covariance and is more stable and accurate than its UKPF counterpart. Simulation results show that the algorithm can reconstruct the observed signals quickly and effectively, while consuming less computational time and achieving higher accuracy than the UKPF-based method. (general)
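
    The cubature rule at the heart of the CKF can be sketched in a few lines: an n-dimensional Gaussian expectation is approximated with 2n equally weighted points mu ± sqrt(n) S e_i, where S is a square root of the covariance. The nonlinear test function below is an illustrative assumption.

```python
import numpy as np

# Spherical-radial cubature rule: 2n points, equal weights 1/(2n).
def cubature_points(mu, P):
    n = mu.size
    S = np.linalg.cholesky(P)                 # square root of the covariance
    xi = np.sqrt(n) * np.concatenate([np.eye(n), -np.eye(n)])  # (2n, n)
    return mu + xi @ S.T

def propagate(mu, P, f):
    """Approximate E[f(x)] for x ~ N(mu, P) by the cubature rule."""
    pts = cubature_points(mu, P)
    return np.mean([f(p) for p in pts], axis=0)

mu = np.array([1.0, 0.0])
P = np.array([[0.5, 0.1], [0.1, 0.3]])
f = lambda x: np.array([x[0] ** 2 + x[1], np.sin(x[0])])

est = propagate(mu, P, f)
print("cubature estimate of E[f(x)]:", est)
```

    The rule is exact for polynomials up to degree three, so the first component reproduces mu[0]**2 + P[0,0] + mu[1] exactly; this economy (2n points versus the UKF's 2n+1 with tuning parameters) is what the abstract's overhead comparison refers to.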

  20. A Total Variation Regularization Based Super-Resolution Reconstruction Algorithm for Digital Video

    Directory of Open Access Journals (Sweden)

    Zhang Liangpei

    2007-01-01

    Full Text Available Super-resolution (SR) reconstruction is capable of producing a high-resolution image from a sequence of low-resolution images. In this paper, we study an efficient SR algorithm for digital video. To effectively deal with the intractable problems in SR video reconstruction, such as inevitable motion estimation errors, noise, blurring, missing regions, and compression artifacts, total variation (TV) regularization is employed in the reconstruction model. We use the fixed-point iteration method and preconditioning techniques to efficiently solve the associated nonlinear Euler-Lagrange equations of the corresponding variational problem in SR. The proposed algorithm has been tested in several cases of motion and degradation. It is also compared with the Laplacian regularization-based SR algorithm and other TV-based SR algorithms. Experimental results are presented to illustrate the effectiveness of the proposed algorithm.

  1. Determination of liquid's molecular interference function based on X-ray diffraction and dual-energy CT in security screening

    International Nuclear Information System (INIS)

    Zhang, Li; YangDai, Tianyi

    2016-01-01

    A method for deriving the molecular interference function (MIF) of an unknown liquid for security screening is presented. Based on the effective atomic number reconstructed from dual-energy computed tomography (CT), the equivalent molecular formula of the liquid is estimated. After a series of optimizations, the MIF and a new effective atomic number are finally obtained from the X-ray diffraction (XRD) profile. The proposed method generates more accurate results with less sensitivity to noise and data deficiency in the XRD profile. - Highlights: • EDXRD combined with dual-energy CT has been utilized for deriving the molecular interference function of an unknown liquid. • The liquid's equivalent molecular formula is estimated based on the effective atomic number reconstructed from dual-energy CT. • The proposed method provides two ways to estimate the molecular interference function: a simplified way and an accurate way. • A new effective atomic number of the liquid can be obtained.

  2. Hybrid light transport model based bioluminescence tomography reconstruction for early gastric cancer detection

    Science.gov (United States)

    Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie

    2012-03-01

    Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure because it is usually at a late stage by the time it is found. Early gastric cancer detection is therefore an effective approach to decreasing gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. Thus, current BLT reconstruction algorithms based on approximations of the radiative transfer equation are not optimal for this problem. To address this gastric-cancer-specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissues. The radiosity theory, integrated with the diffusion equation to form the hybrid light transport model, is utilized to describe light propagation in the non-scattering region. After finite element discretization, the hybrid light transport model is converted into a minimization problem which incorporates an l1-norm based regularization term to reveal the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated with a digital-mouse-based simulation, with a reconstruction error of less than 1 mm. An experiment with an in situ gastric cancer-bearing nude mouse is then conducted. The primary result demonstrates the ability of the novel BLT reconstruction algorithm in early gastric cancer detection.

  3. Artificial neural network (ANN) approach for modeling Zn(II) adsorption in batch process

    Energy Technology Data Exchange (ETDEWEB)

    Yildiz, Sayiter [Engineering Faculty, Cumhuriyet University, Sivas (Turkey)]

    2017-09-15

    Artificial neural networks (ANN) were applied to predict the adsorption efficiency of peanut shells for the removal of Zn(II) ions from aqueous solutions. The effects of initial pH, Zn(II) concentration, temperature, contact duration and adsorbent dosage were determined in batch experiments. The sorption capacities of the sorbents were predicted with the aid of equilibrium and kinetic models. The adsorption of Zn(II) ions onto peanut shell was better described by the pseudo-second-order kinetic model for both initial pH and temperature. The highest R² value in the isotherm studies was obtained from the Freundlich isotherm for the inlet concentration and from the Temkin isotherm for the sorbent amount. The high R² values prove that modeling the adsorption process with ANN is a satisfactory approach. The experimental results and the results predicted by the ANN model were found to be highly compatible with each other.

  4. Artificial neural network (ANN) approach for modeling Zn(II) adsorption in batch process

    International Nuclear Information System (INIS)

    Yildiz, Sayiter

    2017-01-01

    Artificial neural networks (ANN) were applied to predict the adsorption efficiency of peanut shells for the removal of Zn(II) ions from aqueous solutions. The effects of initial pH, Zn(II) concentration, temperature, contact duration and adsorbent dosage were determined in batch experiments. The sorption capacities of the sorbents were predicted with the aid of equilibrium and kinetic models. The adsorption of Zn(II) ions onto peanut shell was better described by the pseudo-second-order kinetic model for both initial pH and temperature. The highest R² value in the isotherm studies was obtained from the Freundlich isotherm for the inlet concentration and from the Temkin isotherm for the sorbent amount. The high R² values prove that modeling the adsorption process with ANN is a satisfactory approach. The experimental results and the results predicted by the ANN model were found to be highly compatible with each other.

  5. Determination of zinc oxide content of mineral medicine calamine using near-infrared spectroscopy based on MIV and BP-ANN algorithm

    Science.gov (United States)

    Zhang, Xiaodong; Chen, Long; Sun, Yangbo; Bai, Yu; Huang, Bisheng; Chen, Keli

    2018-03-01

    Near-infrared (NIR) spectroscopy has been widely used in the analysis of traditional Chinese medicine. It has the advantages of fast analysis, no damage to samples and no pollution. In this research, a fast quantitative model for the zinc oxide (ZnO) content of the mineral medicine calamine was developed based on NIR spectroscopy. NIR spectra of 57 batches of calamine samples were collected, and the first-derivative (FD) method was adopted for spectral pretreatment. The ZnO content of each calamine sample was determined by ethylenediaminetetraacetic acid (EDTA) titration and taken as the reference value for NIR spectroscopy. The 57 batches of calamine samples were divided into calibration and prediction sets using the Kennard-Stone (K-S) algorithm. First, within the calibration set, the correlation coefficient (r) between the absorbance value and the ZnO content of the corresponding samples was calculated at each wave number. Next, the 50 wave numbers with the highest squared correlation coefficients (r2) were selected to compose the characteristic spectral bands (4081.8-4096.3, 4188.9-4274.7, 4335.4, 4763.6, 4794.4-4802.1, 4809.9, 4817.6-4875.4 cm-1), which were used to establish a quantitative model of ZnO content with the back-propagation artificial neural network (BP-ANN) algorithm. The 50 wave numbers were then screened with the mean impact value (MIV) algorithm, retaining those whose absolute MIV was greater than or equal to 25, which yielded the optimal characteristic spectral bands (4875.4-4836.9, 4223.6-4080.9 cm-1). Both internal cross-validation and external validation were then used to select the number of hidden-layer nodes of the BP-ANN, and four hidden-layer nodes were found to be optimal.
Finally, the BP-ANN model showed high accuracy and strong forecasting capacity for analyzing ZnO content in calamine samples within the range 42.05-69.98%, with a relative mean square error of cross validation (RMSECV) of 1.66% and coefficient of
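
The MIV screening step described above admits a compact sketch. The snippet below is a generic illustration with a hypothetical stand-in predictor (not the paper's trained BP-ANN or its spectral data): each input variable is perturbed by ±10% and the mean change in model output is taken as its impact value.

```python
import numpy as np

def mean_impact_values(predict, X, delta=0.10):
    """Mean Impact Value (MIV): perturb each input variable by +/-delta
    and average the resulting change in the model's output."""
    miv = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        up, down = X.copy(), X.copy()
        up[:, j] *= 1.0 + delta
        down[:, j] *= 1.0 - delta
        miv[j] = np.mean(predict(up) - predict(down))
    return miv

# Hypothetical model: output depends strongly on variable 0, weakly on
# variable 1, and not at all on variable 2.
rng = np.random.default_rng(0)
X = rng.uniform(1.0, 2.0, size=(100, 3))
predict = lambda Z: 5.0 * Z[:, 0] + 0.5 * Z[:, 1]
miv = mean_impact_values(predict, X)
keep = np.abs(miv) >= 0.2 * np.abs(miv).max()  # retain only high-impact variables
```

In the paper's setting, `predict` would be the trained BP-ANN, the columns of `X` the 50 candidate wave numbers, and the cut-off the absolute-MIV threshold of 25.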

  6. Emittance reconstruction technique for the Linac4 high energy commissioning

    CERN Document Server

    Lallement, JB; Posocco, PA

    2012-01-01

    Linac4 is a new 160 MeV linear accelerator for negative hydrogen ions (H-), presently under construction, which will replace the 50 MeV proton Linac2 as injector for the CERN proton accelerator complex. Linac4 is 80 meters long and comprises a Low Energy Beam Transport line, a 3 MeV RFQ, a MEBT, a 50 MeV DTL, a 100 MeV CCDTL and a PIMS up to 160 MeV. Commissioning of the Linac is scheduled to start in 2013. It will be divided into several steps corresponding to the commissioning of the different accelerating structures. A temporary measurement bench will be dedicated to the high energy commissioning from 30 to 100 MeV (DTL tanks 2 and 3, and the CCDTL). The commissioning of the PIMS will be done using the permanent equipment installed between the end of the Linac and the main dump. This note describes the technique we will use for reconstructing the transverse emittances and the expected results.

  7. Imaging reconstruction based on improved wavelet denoising combined with parallel-beam filtered back-projection algorithm

    Science.gov (United States)

    Ren, Zhong; Liu, Guodong; Huang, Zhen

    2012-11-01

    Image reconstruction is a key step in medical imaging (MI), and the performance of the reconstruction algorithm determines the quality and resolution of the reconstructed image. Although other algorithms are available, the filtered back-projection (FBP) algorithm is still the classical and most commonly used algorithm in clinical MI. In FBP, filtering of the original projection data is a key step for suppressing artifacts in the reconstructed image. Simply using classical filters such as the Shepp-Logan (SL) or Ram-Lak (RL) filter has drawbacks and limitations in practice, especially for projection data polluted by non-stationary random noise. Therefore, an improved wavelet denoising method combined with the parallel-beam FBP algorithm is used in this paper to enhance the quality of the reconstructed image. In the experiments, the reconstruction results of the improved wavelet denoising method were compared with those of other methods (direct FBP, mean filtering combined with FBP, and median filtering combined with FBP). To determine the optimum reconstruction, different algorithms and different wavelet bases combined with three filters were tested. Experimental results show that the reconstruction quality of the improved FBP algorithm is better than that of the others. Comparing the results of the different algorithms by two evaluation standards, the mean-square error (MSE) and the peak signal-to-noise ratio (PSNR), the improved FBP based on the db2 wavelet and the Hanning filter at decomposition scale 2 performed best: its MSE was lower and its PSNR higher than those of the others. This improved FBP algorithm therefore has potential value in medical imaging.
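
The pipeline — denoise each projection in the wavelet domain, then apply parallel-beam filtered back-projection — can be sketched in plain numpy. For brevity this sketch uses a one-level Haar transform with soft thresholding and an unwindowed Ram-Lak ramp filter, whereas the study used the db2 wavelet at decomposition scale 2 with a Hanning-windowed filter:

```python
import numpy as np

def haar_denoise(signal, thresh):
    """One-level Haar wavelet soft-threshold denoising (even-length signals)."""
    a = (signal[0::2] + signal[1::2]) / np.sqrt(2)        # approximation band
    d = (signal[0::2] - signal[1::2]) / np.sqrt(2)        # detail band
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    out = np.empty_like(signal)
    out[0::2] = (a + d) / np.sqrt(2)                      # inverse transform
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def fbp(sinogram, angles_deg):
    """Parallel-beam filtered back-projection with a Ram-Lak (ramp) filter."""
    n_det = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    grid = np.arange(n_det) - n_det / 2
    X, Y = np.meshgrid(grid, grid)
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2
        recon += np.interp(t.ravel(), np.arange(n_det), proj).reshape(n_det, n_det)
    return recon * np.pi / (2 * len(angles_deg))

# Analytic sinogram of a centred disk: every projection is 2*sqrt(r^2 - t^2).
n, r = 64, 10.0
t = np.arange(n) - n / 2
proj = 2.0 * np.sqrt(np.maximum(r**2 - t**2, 0.0))
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sino = np.tile(proj, (len(angles), 1))
noisy = sino + np.random.default_rng(1).normal(0.0, 0.5, sino.shape)
denoised = np.apply_along_axis(haar_denoise, 1, noisy, 0.5)
recon = fbp(denoised, angles)
```

Denoising before filtering matters because the ramp filter amplifies high frequencies, where the projection noise lives.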

  8. Temporalis myofascial flap for primary cranial base reconstruction after tumor resection.

    Science.gov (United States)

    Eldaly, Ahmed; Magdy, Emad A; Nour, Yasser A; Gaafar, Alaa H

    2008-07-01

    To evaluate the use of the temporalis myofascial flap in primary cranial base reconstruction following surgical tumor ablation and to explain technical issues, potential complications, and donor site consequences along with their management. Retrospective case series. Tertiary referral center. Forty-one consecutive patients receiving primary temporalis myofascial flap reconstructions following cranial base tumor resections over a 4-year period. Flap survival, postoperative complications, and donor site morbidity. Patients included 37 males and 4 females ranging in age from 10 to 65 years. Two patients received preoperative and 18 received postoperative radiation therapy. Patient follow-up ranged from 4 to 39 months. The whole temporalis muscle was used in 26 patients (63.4%) and only part of a coronally split muscle in 15 patients (36.6%). Nine patients had primary donor site reconstruction using a Medpor® (Porex Surgical, Inc., Newnan, GA) temporal fossa implant; these had excellent aesthetic results. There were no cases of complete flap loss. Partial flap dehiscence was seen in six patients (14.6%); only two required surgical débridement. None of the patients developed cerebrospinal fluid leaks or meningitis. One patient was left with complete paralysis of the temporal branch of the facial nerve. Three patients (all of whom had received postoperative irradiation) developed permanent trismus. The temporalis myofascial flap was found to be an excellent reconstructive alternative for a wide variety of skull base defects following tumor ablation. It is a very reliable, versatile flap that is usually available in the operative field, with relatively low donor site aesthetic and functional morbidity.

  9. Establishing Base Elements of Perspective in Order to Reconstruct Architectural Buildings from Photographs

    Science.gov (United States)

    Dzwierzynska, Jolanta

    2017-12-01

    The use of perspective images, especially historical photographs, for retrieving information about the architectural environment they present is a fast-developing field. A photograph is a perspective image with a secure geometrical connection to reality, so it is possible to reverse the projection process. The aim of the present study is to establish the requirements that a photographic perspective representation should meet for reconstruction purposes, as well as to determine the base elements of perspective, such as the horizon line and the circle of depth, which is a key issue in any reconstruction. The starting point in the reconstruction process is a geometrical analysis of the photograph, especially determination of the kind of perspective projection applied, which is defined by the building's location relative to the projection plane. Proper constructions can then be used. The paper addresses the problem of establishing the base elements of perspective from the photographic image when camera calibration is impossible. It presents different geometric construction methods selected depending on the starting assumptions, so the methods described in the paper are broadly applicable. Moreover, they can be used even for poor-quality photographs with poor perspective geometry. Such constructions can be realized with computer aid when the photographs are in digital form, as presented in the paper. The accuracy of the applied methods depends on the accuracy of the photographic image as well as on drawing accuracy; however, it is sufficient for further reconstruction. Establishing the base elements of perspective as presented in the paper is especially useful in difficult reconstruction cases, when information about the reconstructed architectural form is lacking and it is necessary to rely on solid geometry.

  10. Q&A: Grace Anne Koppel, Living Well with COPD

    Science.gov (United States)

    ... their own lives back is the most rewarding thing we have ever done.

  11. 2017-2018 Travel Expense Reports for Mary Anne Chambers ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Chantal Taylor

    Ottawa. Airfare: $368.41. Other Transportation: $69.95. Accommodation: $542.79. Meals and Incidentals: $164.42. Other: $0.00. Total: $1,145.57. Comments: From residence in Thornhill, Ontario. 2017-2018 Travel Expense Reports for Mary Anne Chambers, Governor, Chairperson of the Human Resources Committee.

  12. 2017-2018 Travel Expense Reports for Mary Anne Chambers ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Chantal Taylor

    Ottawa. Airfare: $563.72. Other Transportation: $74.26. Accommodation: $0.00. Meals and Incidentals: $46.17. Other: $30.00. Total: $714.15. Comments: From residence in Thornhill, Ontario. 2017-2018 Travel Expense Reports for Mary Anne Chambers, Governor, Chairperson of the Human Resources Committee.

  13. Estimating building energy consumption using extreme learning machine method

    International Nuclear Information System (INIS)

    Naji, Sareh; Keivani, Afram; Shamshirband, Shahaboddin; Alengaram, U. Johnson; Jumaat, Mohd Zamin; Mansor, Zulkefli; Lee, Malrey

    2016-01-01

    The current energy requirements of buildings comprise a large percentage of the total energy consumed around the world. The demand for energy, as well as for the construction materials used in buildings, is becoming increasingly problematic for the earth's sustainable future and has led to alarming concern. The energy efficiency of buildings can be improved, and in order to do so, their operational energy usage should be estimated early in the design phase, so that buildings are as sustainable as possible. An early energy estimate can greatly help architects and engineers create sustainable structures. This study proposes a novel method to estimate building energy consumption based on the ELM (Extreme Learning Machine) method. The method is applied to building material thicknesses and their thermal insulation capability (K-value). For this purpose, up to 180 simulations are carried out for different material thicknesses and insulation properties, using the EnergyPlus software application. The estimates and predictions obtained by the ELM model are compared with GP (genetic programming) and ANN (artificial neural network) models for accuracy. The simulation results indicate that an improvement in predictive accuracy is achievable with the ELM approach in comparison with GP and ANN. - Highlights: • Buildings consume huge amounts of energy for operation. • Envelope materials and insulation influence building energy consumption. • Extreme learning machine is used to estimate energy usage of a sample building. • The key effective factors in this study are insulation thickness and K-value.
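
The ELM itself is simple enough to sketch: the input weights and biases of a single hidden layer are drawn at random and left untrained, and only the output weights are solved in closed form by least squares. The toy target below (a smooth function of two scaled inputs standing in for thickness and K-value) is an illustrative assumption, not the study's EnergyPlus data:

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Extreme Learning Machine: random fixed hidden layer, output weights
    solved by least squares (pseudo-inverse)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random biases (never trained)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy surrogate: "energy use" as a nonlinear function of two scaled inputs
# standing in for material thickness and K-value (illustrative only).
rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1.0, size=(180, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2
W, b, beta = elm_train(X[:150], y[:150], n_hidden=60)
pred = elm_predict(X[150:], W, b, beta)
rmse = np.sqrt(np.mean((pred - y[150:]) ** 2))
```

Because training reduces to a single pseudo-inverse, fitting is far faster than back-propagation, which is the main appeal of ELM reported in studies of this kind.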

  14. GNSS troposphere tomography based on two-step reconstructions using GPS observations and COSMIC profiles

    Directory of Open Access Journals (Sweden)

    P. Xia

    2013-10-01

    Full Text Available Traditionally, balloon-based radiosonde soundings are used to study the spatial distribution of atmospheric water vapour. However, this approach cannot be employed frequently because of its high cost. In contrast, the GPS tomography technique can obtain water vapour at high temporal resolution. In the tomography technique, an iterative or non-iterative reconstruction algorithm is usually utilised to overcome the rank deficiency of the observation equations for water vapour inversion. However, each reconstruction algorithm on its own has limitations: the iterative reconstruction algorithm requires accurate initial values of water vapour, while the non-iterative reconstruction algorithm needs proper constraint conditions. To overcome these drawbacks, we present a combined iterative and non-iterative reconstruction approach for three-dimensional (3-D) water vapour inversion using GPS observations and COSMIC profiles. In this approach, the non-iterative reconstruction algorithm is first used to estimate water vapour density based on a priori water vapour information derived from COSMIC radio occultation data. The estimates are then employed as initial values in the iterative reconstruction algorithm. The largest advantage of this approach is that the precise initial values of water vapour density that are essential in the iterative reconstruction algorithm can be obtained. This combined reconstruction algorithm (CRA) is evaluated using 10 days of GPS observations in Hong Kong and COSMIC profiles. The test results indicate that the water vapour accuracy from the CRA is 16 and 14% higher than that of the iterative and non-iterative reconstruction approaches, respectively. In addition, the tomography results obtained from the CRA are further validated using radiosonde data. Results indicate that water vapour densities derived from the CRA agree with radiosonde results very well at altitudes above 2.5 km. The average RMS value of their
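
The two-step idea — a non-iterative, prior-constrained solution used to seed an iterative ART refinement — can be sketched on a toy rank-deficient system. The geometry, prior, and regularisation below are illustrative assumptions, not the authors' actual tomography setup:

```python
import numpy as np

def art_kaczmarz(A, y, x0, n_sweeps=200, relax=1.0):
    """Step 2: iterative ART (Kaczmarz) refinement, seeded with x0."""
    x = x0.astype(float).copy()
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            ai = A[i]
            x += relax * (y[i] - ai @ x) / (ai @ ai) * ai
    return x

def two_step_reconstruction(A, y, x_prior, lam=1.0):
    """Step 1: non-iterative Tikhonov solution pulled toward the a-priori
    profile; its output becomes the initial value of the ART iteration."""
    n = A.shape[1]
    x1 = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y + lam * x_prior)
    return art_kaczmarz(A, y, x1)

# Toy rank-deficient system: 3 slant "rays" through 4 voxels.
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0, 2.0])
y = A @ x_true
x_prior = x_true + 0.3        # coarse a-priori profile (standing in for COSMIC)
x_hat = two_step_reconstruction(A, y, x_prior)
```

The point of the combination is visible even here: the prior resolves the rank deficiency that ART alone cannot, while ART removes the prior's bias by enforcing consistency with the observations.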

  15. Probability- and curve-based fractal reconstruction on 2D DEM terrain profile

    International Nuclear Information System (INIS)

    Lai, F.-J.; Huang, Y.M.

    2009-01-01

    Data compression and reconstruction play important roles in information science and engineering. As part of this field, image compression and reconstruction, which mainly deal with image data set reduction for storage or transmission and with data set restoration with least loss, remain topics deserving a great deal of attention. In this paper we propose a new scheme, compared against the well-known Improved Douglas-Peucker (IDP) method, to extract characteristic or feature points of a two-dimensional digital elevation model (2D DEM) terrain profile in order to compress the data set. For reconstruction using fractal interpolation, we propose a probability-based method that speeds up fractal interpolation execution to a rate three to as much as nine times that of the regular approach. In addition, a curve-based method is proposed to determine the vertical scaling factor, which strongly affects the generation of the interpolated data points, so as to significantly improve reconstruction performance. Finally, an evaluation is made to show the advantage of employing the proposed method to extract characteristic points in association with our novel fractal interpolation scheme.
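
The reconstruction side rests on a fractal interpolation function (FIF) driven by a vertical scaling factor. The sketch below implements only the classical affine IFS with a single user-chosen factor d; the paper's probability-based speed-up and curve-based selection of d are not reproduced here, and the feature points are illustrative values:

```python
import numpy as np

def fractal_interpolate(px, py, d=0.3, n_iter=6):
    """Fractal interpolation through points (px, py) with vertical scaling d.
    Each iteration maps the whole current curve into every interval via the
    affine IFS  w_i(x, y) = (a_i*x + e_i, c_i*x + d*y + f_i)."""
    X, Y = px.astype(float), py.astype(float)
    x0, xN, y0, yN = px[0], px[-1], py[0], py[-1]
    for _ in range(n_iter):
        newX, newY = [], []
        for i in range(len(px) - 1):
            # Coefficients fixed by w_i(x0,y0)=(px[i],py[i]), w_i(xN,yN)=(px[i+1],py[i+1])
            a = (px[i + 1] - px[i]) / (xN - x0)
            e = (xN * px[i] - x0 * px[i + 1]) / (xN - x0)
            c = (py[i + 1] - py[i] - d * (yN - y0)) / (xN - x0)
            f = (xN * py[i] - x0 * py[i + 1] - d * (xN * y0 - x0 * yN)) / (xN - x0)
            newX.append(a * X + e)
            newY.append(c * X + d * Y + f)
        order = np.argsort(np.concatenate(newX))
        X = np.concatenate(newX)[order]
        Y = np.concatenate(newY)[order]
    return X, Y

# Feature points extracted from a terrain profile (illustrative values).
px = np.array([0.0, 1.0, 2.0, 3.0])
py = np.array([0.0, 1.0, 0.5, 2.0])
X, Y = fractal_interpolate(px, py, d=0.3, n_iter=6)
```

Larger |d| (with |d| < 1) yields rougher interpolants; the paper's curve-based method chooses this factor from the data rather than fixing it by hand.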

  16. Professor Anne Khademian named National Academy of Public Administration Fellow

    OpenAIRE

    Chadwick, Heather Riley

    2009-01-01

    Anne Khademian, professor with Virginia Tech's Center for Public Administration and Policy, School of Public and International Affairs, at the Alexandria, Va., campus has been elected a National Academy of Public Administration (NAPA) Fellow.

  17. The legitimation of rock in the USSR in the 1970s and 1980s

    OpenAIRE

    Zaytseva, Anna

    2017-01-01

    Abstract: This article analyses the path travelled by rock in the USSR and then in Russia, from its status as a Westernised, English-language (sub)culture that found refuge in the discotheques of the 1960s and 1970s, to the canon of today's russkij rok, which has become almost synonymous with sung poetry, by way of its progressive legitimation in the 1980s. This legitimation began at the end of the 1970s with the forceful arrival of a new artistic generation within the rock underground ("new wave" of ...

  18. 3D Reconstruction of human bones based on dictionary learning.

    Science.gov (United States)

    Zhang, Binkai; Wang, Xiang; Liang, Xiao; Zheng, Jinjin

    2017-11-01

    An effective method for reconstructing a 3D model of human bones from computed tomography (CT) image data based on dictionary learning is proposed. In this study, the dictionary comprises the vertices of triangular meshes, and the sparse coefficient matrix indicates the connectivity information. For better reconstruction performance, we introduce a balance coefficient between the approximation and regularisation terms, together with an optimisation method. Moreover, we apply a local updating strategy and a mesh-optimisation method to update the dictionary and the sparse matrix, respectively. The two updating steps are iterated alternately until the objective function converges. Thus, a reconstructed mesh can be obtained with high accuracy and regularisation. The experimental results show that the proposed method has the potential to obtain high-precision, high-quality triangular meshes for rapid prototyping, medical diagnosis, and tissue engineering. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
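
The alternating scheme — update the sparse codes with the dictionary fixed, then update the dictionary with the codes fixed, iterating until the objective converges — can be illustrated generically. The sketch below uses ISTA for the sparse step and a MOD-style least-squares dictionary update on synthetic vectors; it is not the paper's mesh-vertex dictionary or its mesh-optimisation update:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding operator (proximal map of the L1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dictionary_learning(X, n_atoms=8, lam=0.05, n_outer=60, seed=0):
    """Alternate: ISTA steps for the sparse codes S, then a MOD-style
    least-squares dictionary update with column renormalisation."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    S = np.zeros((n_atoms, X.shape[1]))
    for _ in range(n_outer):
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        for _ in range(10):                    # sparse-coding step (ISTA)
            S = soft(S - D.T @ (D @ S - X) / L, lam / L)
        D = X @ np.linalg.pinv(S)              # dictionary step (least squares)
        for k in range(n_atoms):               # renormalise; re-seed dead atoms
            nk = np.linalg.norm(D[:, k])
            if nk < 1e-10:
                D[:, k] = rng.normal(size=X.shape[0])
                nk = np.linalg.norm(D[:, k])
            D[:, k] /= nk
            S[k] *= nk
    return D, S

# Synthetic data drawn from a ground-truth sparse model.
rng = np.random.default_rng(1)
D_true = rng.normal(size=(10, 8))
D_true /= np.linalg.norm(D_true, axis=0)
S_true = rng.normal(size=(8, 50)) * (rng.random((8, 50)) < 0.3)
X = D_true @ S_true
D, S = dictionary_learning(X)
rel_err = np.linalg.norm(X - D @ S) / np.linalg.norm(X)
```

Each half-step decreases the same objective, which is why the alternation converges to a (local) minimum, mirroring the paper's alternate-until-convergence loop.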

  19. Vegetarian Eco-feminist Consciousness in Carol Ann Duffy’s Poetry

    Directory of Open Access Journals (Sweden)

    Jie Zhou

    2015-07-01

    Full Text Available This paper discusses the vegetarian eco-feminist consciousness in Carol Ann Duffy's poetry through close analysis of two poems, "The Dolphins" and "A Healthy Diet", from her collection Standing Female Nude. The former is a dramatic monologue of a dolphin exploited by people, and the latter is a dramatic monologue of an omniscient observer in a restaurant. Both poems criticize speciesism, and together they reveal the poet's vegetarian eco-feminist consciousness. A close reading of the two poems from the eco-feminist perspective helps the reader understand why Carol Ann Duffy is honored as the first woman poet laureate in British history, and better understand vegetarian eco-feminism and its influence in British society. Keywords: eco-feminism; consciousness; speciesism; vegetarian; animal; diet

  20. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    Energy Technology Data Exchange (ETDEWEB)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina, E-mail: simon.felix@fhnw.ch, E-mail: roman.bolzern@fhnw.ch, E-mail: marina.battaglia@fhnw.ch [University of Applied Sciences and Arts Northwestern Switzerland FHNW, 5210 Windisch (Switzerland)

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS-CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS-CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.
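
The underlying compressed-sensing formulation — recover a spatially sparse image from an incomplete set of Fourier components ("visibilities") — can be sketched with a plain ISTA solver. This is a generic illustration under assumed toy data, not the VIS-CS algorithm itself:

```python
import numpy as np

def ista_fourier(vis, mask, shape, lam=0.02, n_iter=300):
    """ISTA for  min 0.5*||M F x - vis||^2 + lam*||x||_1  with a unitary FFT:
    a gradient step (step size 1 is valid since ||M F|| <= 1), then a
    soft-threshold step."""
    x = np.zeros(shape)
    for _ in range(n_iter):
        resid = mask * np.fft.fft2(x, norm="ortho") - vis
        x = x - np.real(np.fft.ifft2(mask * resid, norm="ortho"))  # adjoint step
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)          # sparsity prox
    return x

# Toy "flare": two point sources; observe 30% of the Fourier components.
rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[8, 8], truth[20, 25] = 1.0, 0.6
mask = rng.random((32, 32)) < 0.3
vis = mask * np.fft.fft2(truth, norm="ortho")   # the observed "visibilities"
recon = ista_fourier(vis, mask, truth.shape)
```

The sparsity prior is what fills in the unmeasured Fourier components; with enough samples the source positions and relative fluxes are recovered up to the small shrinkage bias introduced by the L1 penalty.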

  1. A Compressed Sensing-based Image Reconstruction Algorithm for Solar Flare X-Ray Observations

    Science.gov (United States)

    Felix, Simon; Bolzern, Roman; Battaglia, Marina

    2017-11-01

    One way of imaging X-ray emission from solar flares is to measure Fourier components of the spatial X-ray source distribution. We present a new compressed sensing-based algorithm named VIS_CS, which reconstructs the spatial distribution from such Fourier components. We demonstrate the application of the algorithm on synthetic and observed solar flare X-ray data from the Reuven Ramaty High Energy Solar Spectroscopic Imager satellite and compare its performance with existing algorithms. VIS_CS produces competitive results with accurate photometry and morphology, without requiring any algorithm- and X-ray-source-specific parameter tuning. Its robustness and performance make this algorithm ideally suited for the generation of quicklook images or large image cubes without user intervention, such as for imaging spectroscopy analysis.

  2. Interdependencies of acquisition, detection, and reconstruction techniques on the accuracy of iodine quantification in varying patient sizes employing dual-energy CT

    Energy Technology Data Exchange (ETDEWEB)

    Marin, Daniele; Pratts-Emanuelli, Jose J.; Mileto, Achille; Bashir, Mustafa R.; Nelson, Rendon C.; Boll, Daniel T. [Duke University Medical Center, Department of Radiology, Durham, NC (United States); Husarik, Daniela B. [University Hospital Zurich, Diagnostic and Interventional Radiology, Zurich (Switzerland)

    2014-10-03

    To assess the impact of patient habitus, acquisition parameters, detector efficiencies, and reconstruction techniques on the accuracy of iodine quantification using dual-source dual-energy CT (DECT). Two phantoms simulating small and large patients contained 20 iodine solutions mimicking vascular and parenchymal enhancement from saline isodensity to 400 HU and 30 iodine solutions simulating enhancement of the urinary collecting system from 400 to 2,000 HU. DECT acquisition (80/140 kVp and 100/140 kVp) was performed using two DECT systems equipped with standard and integrated electronics detector technologies. DECT raw datasets were reconstructed using filtered backprojection (FBP), and iterative reconstruction (SAFIRE I/V). Accuracy for iodine quantification was significantly higher for the small compared to the large phantoms (9.2 % ± 7.5 vs. 24.3 % ± 26.1, P = 0.0001), the integrated compared to the conventional detectors (14.8 % ± 20.6 vs. 18.8 % ± 20.4, respectively; P = 0.006), and SAFIRE V compared to SAFIRE I and FBP reconstructions (15.2 % ± 18.1 vs. 16.1 % ± 17.6 and 18.9 % ± 20.4, respectively; P ≤ 0.003). A significant synergism was observed when the most effective detector and reconstruction techniques were combined with habitus-adapted dual-energy pairs. In a second-generation dual-source DECT system, the accuracy of iodine quantification can be substantially improved by an optimal choice and combination of acquisition parameters, detector, and reconstruction techniques. (orig.)

  3. How to perform 3D reconstruction of skull base tumours.

    Science.gov (United States)

    Bonne, N-X; Dubrulle, F; Risoud, M; Vincent, C

    2017-04-01

    The surgical management of skull base lesions is difficult due to the complex anatomy of the region and the intimate relations between the lesion and adjacent nerves and vessels. Minimally invasive approaches are increasingly used in skull base surgery to ensure an optimal functional prognosis. Three-dimensional (3D) computed tomography (CT) reconstruction facilitates surgical planning by visualizing the anatomical relations of the lesions in all planes (arteries, veins, nerves, inner ear) and simulation of the surgical approach in the operating position. Helical CT angiography is performed with optimal timing of the injection in terms of tumour and vessel contrast enhancement. 3D definition of each structure is based on colour coding by automatic thresholding (bone, vessels) or manual segmentation on each slice (tumour, nerves, inner ear). Imaging is generally presented in 3 dimensions (superior, coronal, sagittal) with simulation of the surgical procedure (5 to 6 reconstructions in the operating position at different depths). Copyright © 2016. Published by Elsevier Masson SAS.

  4. GPU-based online track reconstruction for PANDA and application to the analysis of D→Kππ

    Energy Technology Data Exchange (ETDEWEB)

    Herten, Andreas

    2015-07-02

    The PANDA experiment is a new hadron physics experiment being built for the FAIR facility in Darmstadt, Germany. PANDA will employ a novel data acquisition scheme: the experiment will reconstruct the full stream of events in real time and make trigger decisions based on the event topology. An important part of this online event reconstruction is online track reconstruction, whose algorithms need to reconstruct particle trajectories in nearly real time. This work uses high-throughput Graphics Processing Units (GPUs) to benchmark different online track reconstruction algorithms. The reconstruction of D± → K∓π±π± is studied extensively and one online track reconstruction algorithm is applied.

  5. The Effect of Sterile Acellular Dermal Matrix Use on Complication Rates in Implant-Based Immediate Breast Reconstructions

    Directory of Open Access Journals (Sweden)

    Jun Ho Lee

    2016-11-01

    Full Text Available Background: The use of acellular dermal matrix (ADM) in implant-based immediate breast reconstruction has been increasing. The ADMs currently available for breast reconstruction are offered as aseptic or sterile. No published studies have compared aseptic and sterile ADM in implant-based immediate breast reconstruction. The authors performed a retrospective study to evaluate the outcomes of aseptic versus sterile ADM in implant-based immediate breast reconstruction. Methods: Implant-based immediate breast reconstructions with ADM conducted between April 2013 and January 2016 were included. The patients were divided into 2 groups: the aseptic ADM (AlloDerm) group and the sterile ADM (MegaDerm) group. Archived records were reviewed for demographic data and postoperative complication types and frequencies. The complications included were infection, flap necrosis, capsular contracture, seroma, hematoma, and explantation for any cause. Results: Twenty patients were reconstructed with aseptic ADM, and 68 patients with sterile ADM. Rates of infection (15.0% vs. 10.3%), flap necrosis (5.0% vs. 7.4%), capsular contracture (20.0% vs. 14.7%), seroma (10.0% vs. 14.7%), hematoma (0% vs. 1.5%), and explantation (10.0% vs. 8.8%) were not significantly different between the 2 groups. Conclusions: Sterile ADM did not provide better results regarding infectious complications than aseptic ADM in implant-based immediate breast reconstruction.

  6. Three-dimension reconstruction based on spatial light modulator

    International Nuclear Information System (INIS)

    Deng Xuejiao; Zhang Nanyang; Zeng Yanan; Yin Shiliang; Wang Weiyu

    2011-01-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, and biology. Via such technology we can obtain a three-dimensional digital point cloud from a two-dimensional image and then simulate the three-dimensional structure of the physical object for further study. At present, three-dimensional digital point cloud data are mainly obtained with adaptive optics systems using a Shack-Hartmann sensor and phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems, we first calculate the surface normal vector of each pixel in the light-source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, which yields the expected 3D point cloud. Secondly, after de-noising and repair, feature points are selected and fitted by means of Zernike polynomials to obtain the fitting function of the surface topography, so as to reconstruct the object's three-dimensional topography. In this paper, a new kind of three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from grayscale values at different sample points. Moreover, simulation and experimental results prove that the new algorithm has a strong fitting capability, especially for large-scale objects.

  7. Three-dimension reconstruction based on spatial light modulator

    Science.gov (United States)

    Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, and biology. Via such technology we can obtain a three-dimensional digital point cloud from a two-dimensional image and then simulate the three-dimensional structure of the physical object for further study. At present, three-dimensional digital point cloud data are mainly obtained with adaptive optics systems using a Shack-Hartmann sensor and phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems, we first calculate the surface normal vector of each pixel in the light-source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, which yields the expected 3D point cloud. Secondly, after de-noising and repair, feature points are selected and fitted by means of Zernike polynomials to obtain the fitting function of the surface topography, so as to reconstruct the object's three-dimensional topography. In this paper, a new kind of three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from grayscale values at different sample points. Moreover, simulation and experimental results prove that the new algorithm has a strong fitting capability, especially for large-scale objects.

  8. Efficient L1 regularization-based reconstruction for fluorescent molecular tomography using restarted nonlinear conjugate gradient.

    Science.gov (United States)

    Shi, Junwei; Zhang, Bin; Liu, Fei; Luo, Jianwen; Bai, Jing

    2013-09-15

    For the ill-posed fluorescent molecular tomography (FMT) inverse problem, L1 regularization can preserve high-frequency information such as edges while effectively reducing image noise. However, the state-of-the-art L1 regularization-based algorithms for FMT reconstruction are memory-expensive, especially for large-scale problems. An efficient L1 regularization-based reconstruction algorithm using a restarted nonlinear conjugate gradient method is proposed to increase computational speed with low memory consumption. The reconstruction results from phantom experiments demonstrate that the proposed algorithm can obtain high spatial resolution and a high signal-to-noise ratio, as well as high localization accuracy for fluorescence targets.
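
The restarted nonlinear conjugate gradient idea can be sketched on a smoothed L1 objective. The system matrix, the smoothing parameter, and the restart interval below are illustrative assumptions on a generic sparse-recovery toy problem, not the authors' FMT forward model:

```python
import numpy as np

def ncg_l1(A, y, lam=0.01, eps=1e-4, n_iter=300, restart=20):
    """Polak-Ribiere nonlinear CG with periodic restarts for
    min 0.5*||Ax - y||^2 + lam * sum(sqrt(x^2 + eps))   (smoothed L1)."""
    f = lambda x: 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.sqrt(x**2 + eps))
    grad = lambda x: A.T @ (A @ x - y) + lam * x / np.sqrt(x**2 + eps)
    x = np.zeros(A.shape[1])
    g = grad(x)
    d = -g
    for k in range(n_iter):
        if g @ d >= 0:                        # safeguard: ensure descent direction
            d = -g
        t, f0, slope = 1.0, f(x), g @ d       # backtracking line search
        while f(x + t * d) > f0 + 1e-4 * t * slope and t > 1e-12:
            t *= 0.5
        x = x + t * d
        g_new = grad(x)
        if (k + 1) % restart == 0:
            d = -g_new                        # periodic restart: steepest descent
        else:
            beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ formula
            d = -g_new + beta * d
        g = g_new
    return x

# Sparse-recovery toy problem (illustrative, not real FMT data).
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 100)) / np.sqrt(60)
x_true = np.zeros(100)
x_true[[5, 17, 42, 73]] = [1.0, -1.0, 0.8, -0.6]
y = A @ x_true
x_hat = ncg_l1(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

The memory profile is what matters here: the solver keeps only a handful of vectors (x, g, d), never a factorisation of A, which is the advantage the abstract claims over matrix-heavy L1 solvers.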

  9. Image quality in children with low-radiation chest CT using adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Directory of Open Access Journals (Sweden)

    Jihang Sun

    Full Text Available OBJECTIVE: To evaluate noise reduction and image quality improvement in low-radiation-dose chest CT images in children using adaptive statistical iterative reconstruction (ASIR) and a full model-based iterative reconstruction (MBIR) algorithm. METHODS: Forty-five children (ages ranging from 28 days to 6 years, median 1.8 years) who received low-dose chest CT scans were included. An age-dependent noise index (NI) was used for acquisition. Images were retrospectively reconstructed using three methods: MBIR, 60% ASIR blended with 40% conventional filtered back-projection (FBP), and FBP alone. The subjective quality of the images was independently evaluated by two radiologists. Objective noise in the left ventricle (LV), muscle, fat, descending aorta and lung field was measured at the level with the largest cross-sectional area of the LV, with the region of interest set to about one quarter to one half of the area of the descending aorta. The optimized signal-to-noise ratio (SNR) was calculated. RESULTS: In terms of subjective quality, MBIR images were significantly better than ASIR and FBP in image noise and visibility of tiny structures, but blurred edges were observed. In terms of objective noise, MBIR and ASIR reconstruction decreased the image noise by 55.2% and 31.8%, respectively, for the LV compared with FBP. Similarly, MBIR and ASIR reconstruction increased the SNR by 124.0% and 46.2%, respectively, compared with FBP. CONCLUSION: Compared with FBP and ASIR, overall image quality and noise reduction were significantly improved by MBIR, which can reconstruct eligible chest CT images in children at a lower radiation dose.

  10. A reconstruction algorithm for coherent scatter computed tomography based on filtered back-projection

    International Nuclear Information System (INIS)

    Stevendaal, U. van; Schlomka, J.-P.; Harding, A.; Grass, M.

    2003-01-01

    Coherent scatter computed tomography (CSCT) is a reconstructive x-ray imaging technique that yields the spatially resolved coherent-scatter form factor of the investigated object. Reconstruction from coherently scattered x-rays is commonly done using algebraic reconstruction techniques (ART). In this paper, we propose an alternative approach based on filtered back-projection. For the first time, a three-dimensional (3D) filtered back-projection technique using curved 3D back-projection lines is applied to two-dimensional coherent scatter projection data. The proposed algorithm is tested with simulated projection data as well as with projection data acquired with a demonstrator setup similar to a multi-line CT scanner geometry. While yielding image quality comparable to that of ART reconstruction, the modified 3D filtered back-projection algorithm is about two orders of magnitude faster. In contrast to iterative reconstruction schemes, it has the advantage that subfield-of-view reconstruction becomes feasible. This allows a selective reconstruction of the coherent-scatter form factor for a region of interest. The proposed modified 3D filtered back-projection algorithm is a powerful reconstruction technique to be implemented in a CSCT scanning system. This method gives coherent scatter CT the potential of becoming a competitive modality for medical imaging or nondestructive testing.
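    For orientation, the ordinary parallel-beam filtered back-projection that the curved-line CSCT algorithm generalizes can be sketched in a few lines. This is standard 2D transmission FBP with a nearest-neighbour forward projector and straight back-projection lines, not the paper's modified 3D curved-line method; all names are illustrative:

```python
import numpy as np

def radon(img, thetas):
    """Parallel-beam forward projection (nearest-neighbour sampling)."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n] - c
    sino = np.empty((len(thetas), n))
    for i, th in enumerate(thetas):
        xsrc = np.cos(th) * xs - np.sin(th) * ys   # rotate the sampling grid
        ysrc = np.sin(th) * xs + np.cos(th) * ys
        xi = np.clip(np.round(xsrc + c).astype(int), 0, n - 1)
        yi = np.clip(np.round(ysrc + c).astype(int), 0, n - 1)
        sino[i] = img[yi, xi].sum(axis=0)          # line integrals per detector bin
    return sino

def fbp(sino, thetas):
    """Filtered back-projection: ramp filter in Fourier space, then smearing."""
    n = sino.shape[1]
    c = (n - 1) / 2.0
    ramp = np.abs(np.fft.fftfreq(n))               # ideal ramp filter
    filt = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    ys, xs = np.mgrid[0:n, 0:n] - c
    rec = np.zeros((n, n))
    for i, th in enumerate(thetas):
        t = np.cos(th) * xs + np.sin(th) * ys      # detector coordinate of each pixel
        ti = np.clip(np.round(t + c).astype(int), 0, n - 1)
        rec += filt[i][ti]                         # straight back-projection lines
    return rec * np.pi / len(thetas)
```

    The CSCT variant replaces the straight lines in the second loop with curved 3D back-projection lines matched to the scatter geometry.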

  11. 2016-2017 Travel Expense Reports for Mary Anne Chambers ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Beata Bialic

    Date(s):. 2016-07-06. Destination(s):. Ottawa. Airfare: $482.11. Other. Transportation: $64.30. Accommodation: $0.00. Meals and. Incidentals: $25.28. Other: $0.00. Total: $571.69. Comments: 2016-2017 Travel Expense Reports for Mary. Anne Chambers, Governor, Chairperson of the. Human Resources Committee.

  12. Mapping brain circuits of reward and motivation: in the footsteps of Ann Kelley.

    Science.gov (United States)

    Richard, Jocelyn M; Castro, Daniel C; Difeliceantonio, Alexandra G; Robinson, Mike J F; Berridge, Kent C

    2013-11-01

    Ann Kelley was a scientific pioneer in reward neuroscience. Her many notable discoveries included demonstrations of accumbens/striatal circuitry roles in eating behavior and in food reward, explorations of limbic interactions with hypothalamic regulatory circuits, and additional interactions of motivation circuits with learning functions. Ann Kelley's accomplishments inspired other researchers to follow in her footsteps, including our own laboratory group. Here we describe results from several lines of our research that sprang in part from earlier findings by Kelley and colleagues. We describe hedonic hotspots for generating intense pleasure 'liking', separate identities of 'wanting' versus 'liking' systems, a novel role for dorsal neostriatum in generating motivation to eat, a limbic keyboard mechanism in nucleus accumbens for generating intense desire versus intense dread, and dynamic limbic transformations of learned memories into motivation. We describe how origins for each of these themes can be traced to fundamental contributions by Ann Kelley. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Method of incident low-energy gamma-ray direction reconstruction in the GAMMA-400 gamma-ray space telescope

    International Nuclear Information System (INIS)

    Kheymits, M D; Leonov, A A; Zverev, V G; Galper, A M; Arkhangelskaya, I V; Arkhangelskiy, A I; Yurkin, Yu T; Bakaldin, A V; Suchkov, S I; Topchiev, N P; Dalkarov, O D

    2016-01-01

    The GAMMA-400 space-based gamma-ray telescope has as its main goals to measure cosmic γ-ray fluxes and the electron-positron cosmic-ray component produced, theoretically, in dark-matter-particle decay or annihilation processes, to search for discrete γ-ray sources and study them in detail, to examine the energy spectra of diffuse γ-rays — both galactic and extragalactic — and to study gamma-ray bursts (GRBs) and γ-rays from the active Sun. The scientific goals of the GAMMA-400 telescope require fine angular resolution. The telescope is of the pair-production type. In the converter-tracker, the incident gamma-ray photon converts into an electron-positron pair in a tungsten layer, and the tracks are then detected by silicon-strip position-sensitive detectors. Multiple scattering becomes a significant obstacle to reconstructing the incident gamma-ray direction at energies below several gigaelectronvolts. A method that exploits this scattering process to improve the resolution is proposed in the present work. (paper)

  14. Anne Sütü ve Mikrobiyota Gelişimi [Breast Milk and the Development of the Microbiota]

    OpenAIRE

    GÜNEY, Rabiye; ÇINAR, Nursan

    2017-01-01

    Studies on the effects of a healthy microbiota emphasize that the development of the microbiota is of great importance for children's future health. Many diseases, such as asthma, diabetes and obesity, are closely associated with a damaged or underdeveloped gut microbiota. Breast milk contains a large number of non-pathogenic bacteria that are transferred to the infant and support the development of a healthy gut microbiota. In addition, the micro-organisms in breast milk ...

  15. Extended-Search, Bézier Curve-Based Lane Detection and Reconstruction System for an Intelligent Vehicle

    Directory of Open Access Journals (Sweden)

    Xiaoyun Huang

    2015-09-01

    Full Text Available To improve the real-time performance and detection rate of a Lane Detection and Reconstruction (LDR) system, an extended-search-based lane detection method and a Bézier curve-based lane reconstruction algorithm are proposed in this paper. The extended-search-based lane detection method is designed to search boundary blocks from the initial position, in an upwards direction and along the lane, with small search areas, including continuous search, discontinuous search and bending search, in order to detect different lane boundaries. The Bézier curve-based lane reconstruction algorithm is employed to describe a wide range of lane boundary forms with comparatively simple expressions. In addition, two Bézier curves are adopted to reconstruct the lanes' outer boundaries with large curvature variation. The lane detection and reconstruction algorithm, including the determination of initial blocks, extended search, binarization processing and the fitting of lane boundaries in different scenarios, is verified in road tests. The results show that this algorithm is robust against different shadows and illumination variations; the average processing time per frame is 13 ms. Significantly, it achieves a high detection rate of 88.6% on curved lanes with large or variable curvatures, where the accident rate is higher than on straight lanes.
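    The Bézier reconstruction step can be sketched as a least-squares fit of cubic Bézier control points to ordered boundary points, using chord-length parametrisation. This is an illustrative sketch under assumed conventions, not the paper's implementation:

```python
import numpy as np

def bezier(P, t):
    """Evaluate a cubic Bézier with control points P (4x2) at parameters t in [0,1]."""
    t = np.asarray(t, float)[:, None]
    B = np.hstack([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3])  # Bernstein basis
    return B @ P

def fit_bezier(pts):
    """Least-squares cubic Bézier through ordered boundary points
    (chord-length parametrisation of the point sequence)."""
    pts = np.asarray(pts, float)
    d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))]
    t = (d / d[-1])[:, None]
    B = np.hstack([(1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3])
    P, *_ = np.linalg.lstsq(B, pts, rcond=None)    # solve for the 4 control points
    return P
```

    Boundaries with large curvature variation would use two such curves joined end to end, as the abstract describes.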

  16. The modelling of lead removal from water by deep eutectic solvents functionalized CNTs: artificial neural network (ANN) approach.

    Science.gov (United States)

    Fiyadh, Seef Saadi; AlSaadi, Mohammed Abdulhakim; AlOmar, Mohamed Khalid; Fayaed, Sabah Saadi; Hama, Ako R; Bee, Sharifah; El-Shafie, Ahmed

    2017-11-01

    The main challenge in simulating lead removal is the non-linearity of the relationships between the process parameters. Conventional modelling techniques usually treat this problem as linear. An alternative modelling technique is an artificial neural network (ANN) system, selected here to reflect the non-linearity of the interactions among the variables. Herein, synthesized deep eutectic solvents were used as a functionalizing agent with carbon nanotubes as adsorbents of Pb²⁺. Different parameters were used in the adsorption study, including pH (2.7 to 7), adsorbent dosage (5 to 20 mg), contact time (3 to 900 min) and Pb²⁺ initial concentration (3 to 60 mg/l). The number of experimental trials to feed and train the system was 158 runs carried out at laboratory scale. Two ANN types were designed in this work, feed-forward back-propagation and layer-recurrent; both methods are compared based on their predictive proficiency in terms of the mean square error (MSE), root mean square error, relative root mean square error, mean absolute percentage error and determination coefficient (R²) on the testing dataset. The ANN model of lead removal was subjected to accuracy determination and the results showed an R² of 0.9956 with an MSE of 1.66 × 10⁻⁴. The maximum relative error is 14.93% for the feed-forward back-propagation neural network model.
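    A feed-forward back-propagation network of the kind compared here, together with the R² metric, can be sketched in plain numpy. This toy trains a single hidden layer by full-batch gradient descent on a smooth synthetic curve; the architecture, learning rate and data are illustrative assumptions, not the study's model:

```python
import numpy as np

def train_mlp(X, y, hidden=8, lr=0.2, epochs=2000, seed=0):
    """Train a one-hidden-layer feed-forward network by back-propagation."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
    t = y.reshape(-1, 1)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                   # hidden activations
        out = h @ W2 + b2                          # linear output unit
        err = out - t
        gW2 = h.T @ err / len(X); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h**2)             # back-propagate through tanh
        gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1
    return lambda Z: (np.tanh(Z @ W1 + b1) @ W2 + b2).ravel()

def r2_score(y, yhat):
    """Determination coefficient R^2."""
    return 1 - np.sum((y - yhat)**2) / np.sum((y - y.mean())**2)
```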

  17. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Science.gov (United States)

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.
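    The grammar-driven idea can be illustrated with a toy split grammar: non-terminal shapes are rewritten by production rules until only terminal shapes remain, and the derivation drives which model parts are generated. The symbols and rules below are invented for illustration, not the paper's grammar:

```python
# Toy split grammar: a 'building' axiom is rewritten until only terminals remain.
rules = {
    "building": ["ground_floor", "floor", "floor", "roof"],
    "floor":    ["wall", "window", "wall", "window", "wall"],
}

def derive(symbol):
    """Recursively expand a symbol; terminals (no rule) are emitted as-is."""
    if symbol not in rules:
        return [symbol]
    out = []
    for s in rules[symbol]:
        out.extend(derive(s))
    return out
```

    In the paper's setting, each terminal would carry geometry (a 3D shape from the segmented data) rather than a plain string.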

  18. Three Dimensional Dynamic Model Based Wind Field Reconstruction from Lidar Data

    International Nuclear Information System (INIS)

    Raach, Steffen; Schlipf, David; Haizmann, Florian; Cheng, Po Wen

    2014-01-01

    Using the inflowing horizontal and vertical wind shears for an individual pitch controller is a promising method if blade bending measurements are not available. Due to the limited information provided by a lidar system, the reconstruction of shears in real time is a challenging task, especially for the horizontal shear in the presence of changing wind direction. The internal model principle has been shown to be a promising approach for estimating the shears and directions in 10-minute averages with real measurement data. The static model-based wind vector field reconstruction is extended in this work with a dynamic reconstruction model based on Taylor's frozen turbulence hypothesis. The presented method provides time series over several seconds of the wind speed, shears and direction, which can be directly used in advanced optimal preview control. Therefore, this work is an important step towards the application of preview individual blade pitch control under realistic wind conditions. The method is tested using a turbulent wind field and a detailed lidar simulator. For the simulation, the turbulent wind field structure flows towards the lidar system and is continuously misaligned with respect to the horizontal axis of the wind turbine. Taylor's frozen turbulence hypothesis is taken into account to model the wind evolution. For the reconstruction, the structure is discretized into several stages, where each stage is reduced to an effective wind speed superposed with a linear horizontal and vertical wind shear. Previous lidar measurements are shifted, again using Taylor's hypothesis. The wind field reconstruction problem is then formulated as a nonlinear optimization problem, which minimizes the residual between the assumed wind model and the lidar measurements to obtain the misalignment angle, the effective wind speed and the wind shears for each stage. This method shows good results in reconstructing the wind characteristics of a three
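    A single stage of the fit can be sketched as follows: a line-of-sight lidar measurement is the wind vector projected onto the beam direction, the wind model is an effective speed plus linear horizontal and vertical shears at a misalignment angle, and the residual is minimized. Here the nonlinear angle is handled by a simple scan (for each candidate angle the remaining parameters are linear), which is a stand-in for the paper's nonlinear optimizer; all names are hypothetical:

```python
import numpy as np

def los_model(params, beams, pos):
    """Line-of-sight speeds for wind (U + sh_h*y + sh_v*z) at misalignment phi."""
    U, sh_h, sh_v, phi = params
    u = U + sh_h * pos[:, 1] + sh_v * pos[:, 2]
    wind = np.stack([u * np.cos(phi), u * np.sin(phi), np.zeros_like(u)], axis=1)
    return np.sum(wind * beams, axis=1)            # projection onto unit beam vectors

def reconstruct(beams, pos, v_los):
    """Scan the misalignment angle; solve the linear sub-problem per angle."""
    best = None
    for phi in np.linspace(-np.pi / 4, np.pi / 4, 181):
        proj = beams[:, 0] * np.cos(phi) + beams[:, 1] * np.sin(phi)
        A = np.stack([proj, proj * pos[:, 1], proj * pos[:, 2]], axis=1)
        coef, *_ = np.linalg.lstsq(A, v_los, rcond=None)
        r = np.sum((A @ coef - v_los)**2)          # residual to be minimized
        if best is None or r < best[0]:
            best = (r, coef, phi)
    _, (U, sh_h, sh_v), phi = best
    return U, sh_h, sh_v, phi
```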

  19. Prediction of Tourism Demand in Iran by Using Artificial Neural Network (ANN) and Support Vector Regression (SVR)

    Directory of Open Access Journals (Sweden)

    Seyedehelham Sadatiseyedmahalleh

    2016-02-01

    Full Text Available This research examines the effectiveness of artificial neural networks (ANNs) as an alternative to Support Vector Regression (SVR) in tourism research. The method can be used by the tourism industry to forecast tourism demand in Iran. The outcome reveals that the use of ANNs in tourism research may result in better estimates in terms of prediction bias and accuracy. Further applications of ANNs in the context of tourism demand evaluation are needed to establish and validate these effects.

  20. 2016-2017 Travel Expense Reports for Margaret Ann Biggs ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Beata Bialic

    Purpose: Internal IDRC meetings. Date(s):. 2016-07-04 to 2016-07-06. Destination(s):. Ottawa. Airfare: $0.00. Other. Transportation: $39.00. Accommodation: $0.00. Meals and. Incidentals: $25.43. Other: $0.00. Total: $64.43. Comments: 2016-2017 Travel Expense Reports for. Margaret Ann Biggs, Chairperson.

  1. 2016-2017 Travel Expense Reports for Mary Anne Chambers ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    chantal taylor

    Purpose: Board meetings. Date(s):. 2017-03-19 to 2017-03-22. Destination(s):. Ottawa. Airfare: $121.05. Other. Transportation: $51.92. Accommodation: $926.48. Meals and. Incidentals: $190.40. Other: $0.00. Total: $1,289.85. Comments: 2016-2017 Travel Expense Reports for Mary. Anne Chambers, Governor ...

  2. Model-based respiratory motion compensation for emission tomography image reconstruction

    International Nuclear Information System (INIS)

    Reyes, M; Malandain, G; Koulibaly, P M; Gonzalez-Ballester, M A; Darcourt, J

    2007-01-01

    In emission tomography imaging, respiratory motion causes artifacts in reconstructed images of the lungs and heart, which lead to misinterpretations, imprecise diagnosis, impaired fusion with other modalities, etc. Solutions like respiratory gating, correlated dynamic PET techniques, list-mode data based techniques and others have been tested; they improve the recovered spatial activity distribution in lung lesions, but have the disadvantage of requiring additional instrumentation or of discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion compensation directly into the image reconstruction process, without any additional acquisition protocol considerations. To this end, we propose an extension to the maximum likelihood expectation maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data

  3. A Novel Kernel-Based Regularization Technique for PET Image Reconstruction

    Directory of Open Access Journals (Sweden)

    Abdelwahhab Boudjelal

    2017-06-01

    Full Text Available Positron emission tomography (PET) is an imaging technique that generates 3D detail of physiological processes at the cellular level. The technique requires a radioactive tracer, which decays and releases a positron that collides with an electron; consequently, annihilation photons are emitted, which can be measured. The purpose of PET is to use the measurement of photons to reconstruct the distribution of radioisotopes in the body. Currently, PET is undergoing a revamp, with advancements in data measurement instruments and the computing methods used to create the images. These computational methods are required to solve the inverse problem of “image reconstruction from projection”. This paper proposes a novel kernel-based regularization technique for maximum-likelihood expectation-maximization (κ-MLEM) to reconstruct the image. Compared to standard MLEM, the proposed algorithm is more robust and is more effective in removing background noise, whilst preserving the edges; this suppresses image artifacts, such as out-of-focus slice blur.
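    The baseline that κ-MLEM builds on, the standard MLEM update for Poisson emission data, can be sketched in a few lines. This is plain MLEM, not the paper's kernel-regularized variant (which additionally represents the image through a kernel matrix); the system matrix and sizes are illustrative:

```python
import numpy as np

def mlem(A, y, n_iter=1000):
    """Maximum-likelihood EM for emission tomography: y ~ Poisson(A x).

    Multiplicative update: x <- x * A^T(y / Ax) / A^T 1, which preserves
    non-negativity and increases the Poisson likelihood at each step."""
    x = np.ones(A.shape[1])
    sens = np.maximum(A.sum(axis=0), 1e-12)        # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)            # forward projection
        x *= (A.T @ (y / proj)) / sens             # back-project the data ratio
    return x
```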

  4. A New Track Reconstruction Algorithm suitable for Parallel Processing based on Hit Triplets and Broken Lines

    Directory of Open Access Journals (Sweden)

    Schöning André

    2016-01-01

    Full Text Available Track reconstruction in high-track-multiplicity environments at current and future high-rate particle physics experiments is a big challenge and very time consuming. The search for track seeds and the fitting of track candidates are usually the most time-consuming steps in track reconstruction. Here, a new and fast track reconstruction method based on hit triplets is proposed which exploits a three-dimensional fit model including multiple scattering and hit uncertainties from the very start, including the search for track seeds. The hit-triplet-based reconstruction method assumes a homogeneous magnetic field, which allows an analytical solution for the triplet fit. This method is highly parallelizable, needs fewer operations than other standard track reconstruction methods and is therefore ideal for implementation on parallel computing architectures. The proposed track reconstruction algorithm has been studied in the context of the Mu3e experiment and a typical LHC experiment.
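    The geometric core of a hit-triplet fit, the analytic circle through three transverse-plane hits in a homogeneous field, can be sketched as below. This is the textbook circumcircle construction, not the paper's full fit (which also folds in multiple scattering and hit uncertainties); names are illustrative:

```python
import numpy as np

def triplet_circle(p1, p2, p3):
    """Radius and centre of the circle through three 2D hit positions.

    In a homogeneous field B, the transverse momentum then follows from
    pT [GeV] ~ 0.3 * B [T] * R [m]."""
    a, b, c = (np.asarray(p, float) for p in (p1, p2, p3))
    d = 2 * (a[0] * (b[1] - c[1]) + b[0] * (c[1] - a[1]) + c[0] * (a[1] - b[1]))
    ux = ((a @ a) * (b[1] - c[1]) + (b @ b) * (c[1] - a[1]) + (c @ c) * (a[1] - b[1])) / d
    uy = ((a @ a) * (c[0] - b[0]) + (b @ b) * (a[0] - c[0]) + (c @ c) * (b[0] - a[0])) / d
    centre = np.array([ux, uy])
    return np.linalg.norm(a - centre), centre
```

    Because each triplet is fitted independently, the computation maps naturally onto parallel architectures, which is the property the abstract emphasizes.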

  5. Pediatric 320-row cardiac computed tomography using electrocardiogram-gated model-based full iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu [The University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan); Ino, Kenji [The University of Tokyo Hospital, Imaging Center, Tokyo (Japan); Torigoe, Rumiko [Toshiba Medical Systems, Tokyo (Japan)

    2017-10-15

    Full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the imaging quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT. We included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared quality of images reconstructed using the two algorithms (full and hybrid iterative reconstruction) based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast-enhancement material using a 3-point scale, and we evaluated the overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction for the presence of beam-hardening artifacts. Image noise was significantly lower in full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher in full iterative reconstruction. The diagnostic quality was superior in images with cardiac CT reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)

  6. Pediatric 320-row cardiac computed tomography using electrocardiogram-gated model-based full iterative reconstruction

    International Nuclear Information System (INIS)

    Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu; Ino, Kenji; Torigoe, Rumiko

    2017-01-01

    Full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the imaging quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT. We included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared quality of images reconstructed using the two algorithms (full and hybrid iterative reconstruction) based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast-enhancement material using a 3-point scale, and we evaluated the overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction for the presence of beam-hardening artifacts. Image noise was significantly lower in full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher in full iterative reconstruction. The diagnostic quality was superior in images with cardiac CT reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)

  7. ANN-Benchmarks: A Benchmarking Tool for Approximate Nearest Neighbor Algorithms

    DEFF Research Database (Denmark)

    Aumüller, Martin; Bernhardsson, Erik; Faithfull, Alexander

    2017-01-01

    This paper describes ANN-Benchmarks, a tool for evaluating the performance of in-memory approximate nearest neighbor algorithms. It provides a standard interface for measuring the performance and quality achieved by nearest neighbor algorithms on different standard data sets. It supports several... visualise these as images, plots, and websites with interactive plots. ANN-Benchmarks aims to provide a constantly updated overview of the current state of the art of k-NN algorithms. In the short term, this overview allows users to choose the correct k-NN algorithm and parameters... for their similarity search task; in the longer term, algorithm designers will be able to use this overview to test and refine automatic parameter tuning. The paper gives an overview of the system, evaluates the results of the benchmark, and points out directions for future work. Interestingly, very different...

  8. Holographic images reconstructed from GMR-based fringe pattern

    Directory of Open Access Journals (Sweden)

    Kikuchi Hiroshi

    2013-01-01

    Full Text Available We have developed a magneto-optical spatial light modulator (MOSLM) using giant magneto-resistance (GMR) structures for realizing a holographic three-dimensional (3D) display. For practical applications, the reconstructed image of a hologram consisting of GMR structures should be investigated in order to study the feasibility of the MOSLM. In this study, we fabricated a hologram with a GMR-based fringe pattern and demonstrated a reconstructed image. A fringe pattern convolving a cross-shaped image was calculated by a conventional binary computer-generated hologram (CGH) technique. The CGH pattern has 2,048 × 2,048 pixels with a 5 μm pixel pitch. The GMR stack, consisting of a Tb-Fe-Co/CoFe pinned layer, a Ag spacer, a Gd-Fe free layer for light modulation, and a Ru capping layer, was deposited by dc-magnetron sputtering. The GMR hologram was formed using photo-lithography and Kr-ion milling processes, followed by the deposition of a Tb-Fe-Co reference layer with large coercivity and the same Kerr-rotation angle as the free layer, and a lift-off process. The reconstructed image in the ON state was clearly observed and successfully distinguished from the OFF state by switching the magnetization direction of the free layer with an external magnetic field. These results indicate the possibility of realizing a holographic 3D display with a MOSLM using GMR structures.
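    A conventional binary Fourier CGH of the kind mentioned can be sketched numerically: apply a random phase to the target image, transform to the hologram plane, and binarize, which matches the two magnetization states of the pixels; reconstruction is modelled by the inverse transform. This toy ignores the optics and device physics and is not the authors' calculation:

```python
import numpy as np

def binary_cgh(target, seed=0):
    """Binary Fourier CGH: random phase on the target, FFT, binarize real part."""
    rng = np.random.default_rng(seed)
    field = target * np.exp(2j * np.pi * rng.random(target.shape))
    holo = np.fft.fft2(field)
    return (holo.real > 0).astype(np.uint8)        # two pixel states (e.g. +/- Kerr)

def reconstruct_cgh(cgh):
    """Model of optical reconstruction: inverse FFT of the binary +/-1 pattern."""
    return np.abs(np.fft.ifft2(np.where(cgh, 1.0, -1.0)))**2
```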

  9. Light-flavor squark reconstruction at CLIC

    CERN Document Server

    AUTHOR|(SzGeCERN)548062; Weuste, Lars

    2015-01-01

    We present a simulation study of the prospects for the mass measurement of TeV-scale light-flavored right-handed squarks at a 3 TeV e+e- collider based on CLIC technology. The analysis is based on full GEANT4 simulations of the CLIC_ILD detector concept, including Standard Model physics backgrounds and beam-induced hadronic backgrounds from two-photon processes. The analysis serves as a generic benchmark for the reconstruction of highly energetic jets in events with substantial missing energy. Several jet-finding algorithms were evaluated, with the longitudinally invariant kt algorithm showing a high degree of robustness towards beam-induced background while preserving the features typically found in algorithms developed for e+e- collisions. The presented study of the reconstruction of light-flavored squarks shows that for TeV-scale squark masses, sub-percent accuracy on the mass measurement can be achieved at CLIC.

  10. System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging Modalities

    Science.gov (United States)

    Guan, Huifeng

    In the past decade many new X-ray-based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, there exist one or more specific problems that prevent them from being effectively or efficiently employed. In this dissertation, four different novel X-ray-based imaging technologies are discussed, including propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). System characteristics are analyzed or optimized reconstruction methods are proposed for these imaging modalities. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be more reliably reconstructed. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from a few-view D-XPCT data set. By introducing a proper mask, the high-frequency content of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being over-smoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, thus overcoming the

  11. Three-dimension reconstruction based on spatial light modulator

    Energy Technology Data Exchange (ETDEWEB)

    Deng Xuejiao; Zhang Nanyang; Zeng Yanan; Yin Shiliang; Wang Weiyu, E-mail: daisydelring@yahoo.com.cn [Huazhong University of Science and Technology (China)

    2011-02-01

    Three-dimension reconstruction, an important research direction of computer graphics, is widely used in related fields such as industrial design and manufacture, construction, aerospace, biology and so on. Via such technology we can obtain a three-dimension digital point cloud from a two-dimension image, and then simulate the three-dimensional structure of the physical object for further study. At present, the acquisition of three-dimension digital point cloud data is mainly based on adaptive optics systems with a Shack-Hartmann sensor and on phase-shifting digital holography. For surface fitting, there are also many available methods, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems we encountered in three-dimension reconstruction are the extraction of feature points and the arithmetic of curve fitting. To solve such problems, we can, first of all, calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, yielding the expected 3D point cloud. Secondly, after de-noising and repair, feature points can be selected and fitted to obtain the fitting function of the surface topography by means of Zernike polynomials, so as to reconstruct the object's three-dimensional topography. In this paper, a new kind of three-dimension reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from its grayscale at different sample points. Moreover, simulation and experimental results prove that the new algorithm has a strong capability to fit, especially for large-scale objects.
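    The Zernike-polynomial surface fit mentioned above can be sketched as a linear least-squares problem: evaluate a few low-order Zernike modes at the sample points on the unit disk and solve for their coefficients. The mode selection and normalization below are illustrative choices, not the paper's exact basis:

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike polynomials on the unit disk
    (piston, x/y tilt, defocus, two astigmatism terms)."""
    return np.stack([
        np.ones_like(rho),
        2 * rho * np.cos(theta),
        2 * rho * np.sin(theta),
        np.sqrt(3) * (2 * rho**2 - 1),
        np.sqrt(6) * rho**2 * np.cos(2 * theta),
        np.sqrt(6) * rho**2 * np.sin(2 * theta),
    ], axis=1)

def fit_surface(x, y, z):
    """Least-squares Zernike coefficients of heights z sampled at (x, y)."""
    B = zernike_basis(np.hypot(x, y), np.arctan2(y, x))
    coef, *_ = np.linalg.lstsq(B, z, rcond=None)
    return coef
```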

  12. Application of ANN and fuzzy logic algorithms for streamflow ...

    Indian Academy of Sciences (India)

    The present study focusses on the development of models using ANN and fuzzy logic (FL) algorithms for predicting the streamflow of the catchment of the Savitri River Basin. The input vector to these models comprised daily rainfall, mean daily evaporation, mean daily temperature and lagged streamflow. In the present study, 20 years ...

  13. 2016-2017 Travel Expense Reports for Mary Anne Chambers ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Beata Bialic

    Purpose: Board meetings. Date(s):. 2016-11-20 to 2016-11-23. Destination(s):. Ottawa. Airfare: $445.14. Other. Transportation: $29.05. Accommodation: $786.80. Meals and. Incidentals: $76.79. Other: $0.00. Total: $1,337.78. Comments: 2016-2017 Travel Expense Reports for Mary. Anne Chambers, Governor, Chairperson ...

  14. 2016-2017 Travel Expense Reports for Mary Anne Chambers ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Beata Bialic

    Date(s):. 2016-08-14 to 2016-08-23. Destination(s):. Peru/Colombia. Airfare: $3,484.87. Other. Transportation: $0.00. Accommodation: $1,942.21. Meals and. Incidentals: $395.27. Other: $75.50. Total: $5,897.85. Comments: 2016-2017 Travel Expense Reports for Mary. Anne Chambers, Governor, Chairperson of the.

  15. Aspekte van die outeursfunksie in Antjie Krog se Lady Anne (1989)

    OpenAIRE

    M. Crous

    2002-01-01

    Aspects of the author function in Antjie Krog’s Lady Anne (1989) The purpose of this essay is to investigate the Foucauldian notion of the so-called “author function” in Antjie Krog’s seventh volume of poetry, viz. Lady Anne (1989). It is an attempt to show how the notion of the death of the author (Barthes) links up with this theorisation of Foucault. Furthermore, it is also an attempt to indicate the characteristic features of the so-called “author function” in the late eighties in Afr...

  16. Louis Kahn's Architecture of Matter, Light and Energy [Estonian: Louis Kahni mateeria, valguse ja energia arhitektuur] / Anne Griswold Tyng ; trans. Tiina Randus

    Index Scriptorium Estoniae

    Tyng, Anne Griswold

    2007-01-01

    On Louis Kahn's concrete architecture (the Weiss House, 1947-1949), the Yale University Art Gallery (1951-1953), the City Tower project (1952-1958), the Trenton Bath House (1954-1956), the Salk Institute (1959-1965, La Jolla, California), the Kimbell Art Museum (1968-1974), and the capital complex in Dhaka (1965-1974). Anne Griswold Tyng began working for L. Kahn in 1945; also covered are her elementary school project (1949-50) and the addition to her own Philadelphia house (1965-1968).

  17. Efficacy of Vancomycin-based Continuous Triple Antibiotic Irrigation in Immediate, Implant-based Breast Reconstruction

    Directory of Open Access Journals (Sweden)

    Lisa M. Hunsicker, MD, FACS

    2017-12-01

    Conclusions: Continuous breast irrigation with a vancomycin-based triple antibiotic solution is a safe and effective adjunct to immediate implant reconstruction. Use of intramuscular anesthetic injection for postoperative pain control allows the elastomeric infusion pump to be available for local tissue antibiotic irrigation.

  18. Energy-angle correlation correction algorithm for monochromatic computed tomography based on Thomson scattering X-ray source

    Science.gov (United States)

    Chi, Zhijun; Du, Yingchao; Huang, Wenhui; Tang, Chuanxiang

    2017-12-01

    The need for compact, relatively low-cost x-ray sources offering monochromaticity, continuous tunability of x-ray energy, high spatial coherence, straightforward polarization control, and high brightness has driven the rapid development of Thomson scattering x-ray sources. To meet the requirement of in situ monochromatic computed tomography (CT) of large-scale and/or high-attenuation materials based on this type of x-ray source, there is an increasing demand for effective algorithms to correct the energy-angle correlation. In this paper, we take advantage of the parametrization of the x-ray attenuation coefficient to resolve this problem. The linear attenuation coefficient of a material can be decomposed into a linear combination of the energy-dependent photoelectric and Compton cross-sections in the keV energy regime without K-edge discontinuities, and the line integrals of the decomposition coefficients of these two parts can be determined by performing two spectrally different measurements. The line integral of the linear attenuation coefficient of an imaging object at any energy of interest can then be derived through this parametrization, and a monochromatic CT image can be reconstructed at that energy using traditional reconstruction methods, e.g., filtered back projection or the algebraic reconstruction technique. Not only can monochromatic CT be realized, but the distributions of the effective atomic number and electron density of the imaging object can also be retrieved, at the expense of a dual-energy CT scan. Simulation results validating our proposal are shown in this paper. Our results will further expand the scope of application for Thomson scattering x-ray sources.
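    The two-measurement basis decomposition the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 1/E³ photoelectric term, the Klein-Nishina shape for the Compton term, and the chosen energies are illustrative assumptions.

    ```python
    from math import log

    def photoelectric_basis(E):
        # Approximate photoelectric energy dependence in the keV regime
        return E ** -3.0

    def compton_basis(E):
        # Klein-Nishina total cross-section shape, with alpha = E / 511 keV
        a = E / 511.0
        return ((1 + a) / a**2 * (2 * (1 + a) / (1 + 2 * a) - log(1 + 2 * a) / a)
                + log(1 + 2 * a) / (2 * a)
                - (1 + 3 * a) / (1 + 2 * a) ** 2)

    def synthesize_line_integral(A1, A2, E1, E2, E_target):
        """Solve the 2x2 system for the photoelectric/Compton decomposition
        line integrals from two spectrally different measurements A1, A2
        (taken at energies E1, E2), then evaluate the attenuation line
        integral at E_target."""
        fp1, fc1 = photoelectric_basis(E1), compton_basis(E1)
        fp2, fc2 = photoelectric_basis(E2), compton_basis(E2)
        det = fp1 * fc2 - fp2 * fc1
        Ap = (A1 * fc2 - A2 * fc1) / det   # photoelectric line integral
        Ac = (fp1 * A2 - fp2 * A1) / det   # Compton line integral
        return Ap * photoelectric_basis(E_target) + Ac * compton_basis(E_target)
    ```

    A sinogram of line integrals synthesized this way at a single target energy can then be passed to any standard reconstruction method (filtered back projection, ART) to obtain the monochromatic CT image.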

  19. Mart and Mari-Ann Susi, as owners, petition for Concordia's bankruptcy / Andri Maimets

    Index Scriptorium Estoniae

    Maimets, Andri, 1979-

    2003-01-01

    Concordia University rector Mart Susi filed a petition with the court seeking a declaration of bankruptcy for Concordia Varahalduse OÜ, which operated the university. See also: Mari-Ann Susi justified the university's use of funds.

  20. Four-dimensional reconstruction of cultural heritage sites based on photogrammetry and clustering

    Science.gov (United States)

    Voulodimos, Athanasios; Doulamis, Nikolaos; Fritsch, Dieter; Makantasis, Konstantinos; Doulamis, Anastasios; Klein, Michael

    2017-01-01

    A system designed and developed for the three-dimensional (3-D) reconstruction of cultural heritage (CH) assets is presented. Two basic approaches are described. The first, resulting in an "approximate" 3-D model, uses images retrieved from online multimedia collections; it employs a clustering-based technique to perform content-based filtering and eliminate outliers that significantly reduce the performance of 3-D reconstruction frameworks. The second is based on input image data acquired through terrestrial laser scanning, as well as close-range and airborne photogrammetry; it follows a sophisticated multistep strategy, which leads to a "precise" 3-D model. Furthermore, the concept of change history maps is proposed to address the computational limitations involved in four-dimensional (4-D) modeling, i.e., capturing 3-D models of a CH landmark or site at different time instances. The system also comprises a presentation viewer, which manages the display of the multifaceted CH content collected and created. The described methods have been successfully applied and evaluated in challenging real-world scenarios, including the 4-D reconstruction of the historic Market Square of the German city of Calw in the context of the 4-D-CH-World EU project.
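    Clustering-based outlier filtering of the kind mentioned for the first approach can be sketched as follows. The single-linkage (union-find) clustering, the Euclidean distance, and the `radius` parameter are illustrative assumptions, not the paper's actual pipeline; the idea is simply that images whose descriptors fall outside the dominant cluster are dropped before reconstruction.

    ```python
    def largest_cluster_indices(features, radius):
        """Single-linkage clustering via union-find: two images belong to
        the same cluster if a chain of neighbors, each pair closer than
        `radius`, connects them. Returns the indices of the largest
        cluster; all other images are treated as outliers."""
        n = len(features)
        parent = list(range(n))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]  # path halving
                i = parent[i]
            return i

        def union(i, j):
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj

        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

        # Merge every pair of images whose descriptors are close enough
        for i in range(n):
            for j in range(i + 1, n):
                if dist(features[i], features[j]) < radius:
                    union(i, j)

        clusters = {}
        for i in range(n):
            clusters.setdefault(find(i), []).append(i)
        return max(clusters.values(), key=len)
    ```

    Only the images indexed by the returned list would be handed to the structure-from-motion stage; everything else is discarded as content that does not depict the landmark.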