Browsing by Author "Chen, Jie"
Item: Efficient VLSI Architectures for High-Speed Ethernet Transceivers (2008-08), Chen, Jie

This thesis investigates efficient VLSI architectural design of digital signal processing (DSP) transceivers in high-speed multi-pair wireline communication systems, such as 10 Gigabit Ethernet over copper (10GBASE-T), with the goal of reducing the hardware complexity and power consumption of various DSP components while maintaining the required speed and performance. The topics covered include efficient far-end crosstalk (FEXT) cancellers, novel multi-input multi-output (MIMO) equalizers combined with Tomlinson-Harashima precoding (THP), and low-complexity echo and near-end crosstalk (NEXT) cancellers. A novel feedforward delayed FEXT canceller in a THP-based system is developed to remove FEXT as noise. Unlike conventional FEXT cancellation techniques, the proposed canceller can mitigate the non-causal part of FEXT and thus achieves better cancellation performance. In addition, a modified design eliminates the feedback loops in the FEXT cancellers, so that the resulting feedforward FEXT canceller is suitable for high-speed applications. Because FEXT is found to contain information about the symbols transmitted from remote transmitters, a MIMO equalization technique is proposed to jointly process ISI and FEXT so that the useful information in FEXT can be exploited. It is shown that the proposed architecture overcomes the limitations of traditional equalization schemes and achieves better system performance at lower hardware complexity. A computationally efficient approach for calculating the optimal tap coefficients of the MIMO equalizers and cancellers is also proposed to speed up the computation. Furthermore, a practical equalization scheme that combines the MIMO equalization and TH precoding techniques is proposed for real high-speed Ethernet systems.
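The key element of Tomlinson-Harashima precoding is a modulo operation that folds the precoder output back into a bounded interval, so the feedback loop that pre-subtracts post-cursor ISI cannot diverge. A minimal sketch of that idea (the modulo bound M and the feedback taps below are illustrative values, not the thesis's design):

```python
import numpy as np

def thp_modulo(x, M=2.0):
    """Fold x into the interval [-M, M) via modulo-2M arithmetic."""
    return ((x + M) % (2 * M)) - M

def thp_precode(symbols, b, M=2.0):
    """Tomlinson-Harashima precoder with a monic feedback filter
    [1, b[1], ..., b[L]]: subtract post-cursor ISI, then fold."""
    out = np.zeros(len(symbols))
    for n in range(len(symbols)):
        isi = sum(b[k] * out[n - k] for k in range(1, min(n + 1, len(b))))
        out[n] = thp_modulo(symbols[n] - isi, M)
    return out

# Example: PAM symbols through a hypothetical feedback filter.
b = [1.0, 0.6, -0.2]
tx = thp_precode(np.array([1.0, -1.0, 1.0, 1.0, -1.0]), b)
# Every precoded sample stays inside [-2, 2), however large the ISI term.
```

Because the receiver applies the same modulo after equalization, the bounded precoder output costs only a small transmit-power penalty relative to ideal decision-feedback equalization.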
Unlike existing work on MIMO equalization, the proposed scheme complies exactly with the current 10GBASE-T standard and can be easily pipelined for high-speed implementation. Hardware complexity reduction schemes that exploit the increased decision-point SNR (DP-SNR) are also considered. Gigabit and multi-gigabit transceivers require very long adaptive filters for echo and NEXT cancellation; implementing these filters not only occupies a large silicon area but also consumes significant power. This thesis addresses the design of cost-efficient echo and NEXT cancellers from two directions: reducing the number of taps used in these noise cancellers, and reducing the word length used to represent data in a VLSI system. First, the sparse characteristics of the echo and NEXT channel impulse responses are exploited to reduce the computational cost of adaptive echo and NEXT cancellers. Second, a novel word-length reduction scheme is proposed in which the original input to the echo and NEXT cancellers is replaced with a finite-level signal, which is then recoded to a shorter word length. To further reduce the complexity of these cancellers, an improved design exploits the properties of the compensation signal. Compared with the traditional design, the proposed echo and NEXT cancellers operate on the exact input and do not suffer from quantization error, and thus they are more suitable for VLSI implementation. The design of adaptive noise cancellers using the proposed word-length reduction method is also considered, and modified adaptive canceller designs are developed to further reduce the overall hardware cost of echo and NEXT cancellation with acceptable cancellation performance.
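The cost pressure described above comes from the structure of an adaptive canceller: every tap costs a multiply-accumulate per symbol, and the tap weights must also be updated every symbol. A minimal LMS echo-canceller sketch makes this concrete (the tap count, step size, and echo path here are illustrative, not taken from the thesis):

```python
import numpy as np

def lms_echo_canceller(tx, rx, n_taps=8, mu=0.02):
    """Adapt an FIR echo replica against the received signal and
    return the echo-cancelled residual. Each sample costs one
    multiply-accumulate per tap for the replica plus one per tap
    for the LMS weight update."""
    w = np.zeros(n_taps)            # adaptive tap weights
    buf = np.zeros(n_taps)          # delay line of transmitted samples
    err = np.zeros(len(rx))
    for n in range(len(rx)):
        buf = np.roll(buf, 1)
        buf[0] = tx[n]              # buf = [tx[n], tx[n-1], ...]
        echo_est = w @ buf          # FIR echo replica
        err[n] = rx[n] - echo_est   # residual after cancellation
        w += mu * err[n] * buf      # LMS update
    return err

# Example: the receive signal is an echo of tx through a short,
# hypothetical echo path; the canceller learns it and drives the
# residual toward zero.
rng = np.random.default_rng(0)
tx = rng.choice([-1.0, 1.0], size=5000)   # binary PAM symbols
h = np.array([0.5, -0.3, 0.1])            # hypothetical echo path
rx = np.convolve(tx, h)[:5000]
residual = lms_echo_canceller(tx, rx)
```

Scaling this toy loop to the hundreds of taps needed for gigabit echo spans is exactly what makes tap pruning and word-length reduction attractive.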
Finally, a design approach for stable pole-zero modeling of long finite impulse response (FIR) filters is proposed.

Item: Numerical Linear Algebra Techniques for Effective Data Analysis (2010-09), Chen, Jie

Data analysis is the process of inspecting data and extracting useful information from it, with the goal of knowledge and scientific discovery. It brings together several disciplines in mathematics and computer science, including statistics, machine learning, databases, data mining, and pattern recognition, to name just a few. A typical challenge in the current era of information technology is the availability of large volumes of data, together with "the curse of dimensionality". From a computational point of view, this challenge calls for efficient algorithms that scale with the size and the dimension of the data. Numerical linear algebra lays a solid foundation for this task through its rich theory and elegant techniques, and many examples show that it is a crucial ingredient of the data analysis process. This thesis elaborates on this viewpoint through four problems, all of which have significant real-world applications. For each problem, efficient algorithms based on matrix techniques are proposed, with guaranteed low computational cost and high-quality results. In the first scenario, a set of so-called Lanczos vectors is used as an alternative to the principal eigenvectors or singular vectors in dimension-reduction procedures. The Lanczos vectors can be computed inexpensively, and they preserve the latent information of the data, yielding quality comparable to that obtained with eigenvectors or singular vectors. In the second scenario, we consider the construction of a nearest-neighbors graph.
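Both the first scenario and the graph-construction algorithms described next rest on the Lanczos procedure, which builds an orthonormal Krylov basis using only matrix-vector products. A minimal sketch with full reorthogonalization, applied to dimension reduction via a data Gram matrix (the sizes, the starting vector, and this particular usage are illustrative assumptions, not the thesis's exact algorithm):

```python
import numpy as np

def lanczos(A, q0, k):
    """Run k steps of the symmetric Lanczos procedure on A (n x n).
    Returns Q (n x k) with orthonormal columns spanning the Krylov
    space span{q0, A q0, ..., A^(k-1) q0}. Only matvecs with A are
    needed, so the cost per step is one matvec plus O(nk) work."""
    n = A.shape[0]
    Q = np.zeros((n, k))
    q = q0 / np.linalg.norm(q0)
    q_prev = np.zeros(n)
    beta = 0.0
    for j in range(k):
        Q[:, j] = q
        v = A @ q - beta * q_prev
        alpha = q @ v
        v -= alpha * q
        v -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ v)  # full reorthogonalization
        beta = np.linalg.norm(v)
        q_prev, q = q, v / beta
    return Q

# Hypothetical use for dimension reduction: project the data onto a
# few Lanczos vectors of its Gram matrix instead of computing the
# leading eigenvectors.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 50))   # 200 points in 50 dimensions
G = X @ X.T                          # symmetric Gram matrix
Q = lanczos(G, X.sum(axis=1), 8)     # 8 Lanczos vectors
X_reduced = Q.T @ X                  # 8 x 50 reduced representation
```

The appeal is that a few Lanczos steps cost only a few matvecs, yet the resulting basis rapidly captures the dominant spectral directions that eigenvector-based reduction would use.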
Under a divide-and-conquer framework, and via the use of the Lanczos procedure, two algorithms are designed with sub-quadratic (and close to linear) cost, far more efficient than existing practical algorithms when the data at hand are of very high dimension. In the third scenario, a matrix blocking algorithm for reordering a sparse matrix and finding its dense diagonal blocks is adapted to identify dense subgraphs of a sparse graph, with broad applications in community detection for social, biological, and information networks. Finally, in the fourth scenario, we revisit the classical problem of sampling a very high dimensional Gaussian distribution in statistical data analysis. A technique for computing a function of a matrix times a vector is developed to remedy traditional techniques (such as Cholesky factorization of the covariance matrix), which in practice are limited to mega-dimensions. The new technique has the potential to sample Gaussian distributions in tera/peta-dimensions, as typically required for large-scale simulations and uncertainty quantification.
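The contrast in the fourth scenario can be made concrete: the classical sampler factors the covariance once (cubic cost and dense storage), whereas a matrix-function sampler needs only matrix-vector products with the covariance. A sketch of the Krylov idea for a symmetric positive definite Sigma (this Lanczos-based square root is a simplification standing in for the thesis's more refined technique; all sizes and seeds are illustrative):

```python
import numpy as np

def sample_cholesky(Sigma, rng):
    """Classical sampler x = L z with Sigma = L L^T. The O(n^3)
    factorization and dense storage cap the dimension in practice."""
    L = np.linalg.cholesky(Sigma)
    return L @ rng.standard_normal(Sigma.shape[0])

def sample_sqrtm_krylov(Sigma, rng, k):
    """Sample x = Sigma^(1/2) z by approximating Sigma^(1/2) z in a
    k-dimensional Krylov space, using only matvecs with Sigma."""
    n = Sigma.shape[0]
    z = rng.standard_normal(n)
    Q = np.zeros((n, k))
    T = np.zeros((k, k))
    beta0 = np.linalg.norm(z)
    q, q_prev, b = z / beta0, np.zeros(n), 0.0
    for j in range(k):                       # Lanczos on Sigma, start z
        Q[:, j] = q
        v = Sigma @ q - b * q_prev
        T[j, j] = q @ v
        v -= T[j, j] * q
        v -= Q[:, :j + 1] @ (Q[:, :j + 1].T @ v)  # reorthogonalize
        b = np.linalg.norm(v)
        if j < k - 1:                        # assumes no early breakdown
            T[j, j + 1] = T[j + 1, j] = b
            q_prev, q = q, v / b
    # Sigma^(1/2) z ~= beta0 * Q * sqrt(T) * e1, via T's eigenpairs
    evals, evecs = np.linalg.eigh(T)
    return beta0 * (Q @ (evecs @ (np.sqrt(np.maximum(evals, 0.0)) * evecs[0])))

# Example on a small hypothetical SPD covariance.
rng = np.random.default_rng(7)
A = rng.standard_normal((30, 30))
Sigma = A @ A.T + 30 * np.eye(30)
x = sample_sqrtm_krylov(Sigma, np.random.default_rng(3), k=30)
```

In large dimensions one would take k much smaller than n and never form Sigma densely; the sampler then touches Sigma only through k matvecs, which is what opens the door to far larger problems than a Cholesky factorization allows.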