Browsing by Subject "Machine-learning"
Now showing 1 - 2 of 2
Item Automated segmentation and pathology detection in ophthalmic images (2014-07) Roy Chowdhury, Sohini

Computer-aided medical diagnostic system design is an emerging interdisciplinary technology that assists medical practitioners in providing quick and accurate diagnosis and prognosis of pathology. Since manual assessment of digital medical images can be both ambiguous and time-consuming, computer-aided image analysis and evaluation systems can be beneficial for baseline diagnosis, screening, and disease-prioritization tasks. This thesis presents automated algorithms for detecting ophthalmic pathologies of the human retina that may lead to acquired blindness in the absence of timely diagnosis and treatment. Multi-modal automated segmentation and detection algorithms are presented for diabetic manifestations such as Diabetic Retinopathy and Diabetic Macular Edema, along with segmentation algorithms useful for automated detection of Glaucoma, Macular Degeneration, and Vein Occlusions. These algorithms are robust on both normal and pathological images and incur low computational complexity.

First, we present a novel blood vessel segmentation algorithm for fundus images that extracts the major blood vessels by applying high-pass filtering and morphological transforms, followed by the addition of fine vessel pixels classified by a Gaussian Mixture Model (GMM) classifier. The proposed algorithm achieves more than 95% vessel segmentation accuracy on three publicly available data sets. Next, we present an iterative blood vessel segmentation algorithm that initially estimates the major blood vessels and then iteratively adds fine blood vessel segments until a novel stopping criterion terminates the process.
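The two-stage vessel extraction described above (coarse high-pass response plus a GMM over per-pixel features) can be sketched roughly as follows. This is a minimal illustration, not the thesis's actual algorithm: the filter scale, the darkest-decile threshold, and the two-feature representation are all illustrative assumptions.

```python
import numpy as np
from scipy import ndimage
from sklearn.mixture import GaussianMixture

def segment_vessels(green_channel, n_components=2, threshold=0.5):
    """Toy two-stage vessel segmentation: high-pass filtering for major
    vessels, then a GMM posterior over pixel features for fine vessels."""
    # Stage 1: a high-pass response (image minus Gaussian blur) highlights
    # thin, dark vessel structures in the green channel.
    high_pass = green_channel - ndimage.gaussian_filter(green_channel, sigma=3)
    major = high_pass < np.percentile(high_pass, 10)  # darkest responses

    # Stage 2: fit a 2-component GMM on per-pixel features (intensity and
    # high-pass response) and keep pixels assigned to the vessel-like mode.
    feats = np.stack([green_channel.ravel(), high_pass.ravel()], axis=1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(feats)
    # The component with the lower mean intensity is taken as vessel-like.
    vessel_comp = int(np.argmin(gmm.means_[:, 0]))
    post = gmm.predict_proba(feats)[:, vessel_comp]
    fine = post.reshape(green_channel.shape) > threshold
    return major | fine
```

In the thesis, the morphological transforms and the GMM's feature set are considerably richer; the point here is only the pipeline shape: a cheap structural pass for major vessels, then a learned classifier to recover the fine ones.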
This iterative algorithm is particularly robust to the choice of thresholds: it achieves 95.35% vessel segmentation accuracy with 0.9638 area under the ROC curve (AUC) on abnormal retinal images from the publicly available STARE data set.

We also propose a novel rule-based automated optic disc (OD) segmentation algorithm that detects the OD boundary and the location of the vessel origin (VO) pixel. This algorithm first detects OD candidate regions at the intersection of the bright regions and the blood vessels in a fundus image, subject to certain structural constraints, and then estimates a best-fit ellipse around the convex hull that combines all the detected OD candidate regions. The centroid of the blood vessels within the segmented OD boundary is detected as the VO pixel location. The proposed algorithm achieves an average overlap score of 80% on images from five public data sets.

We present a novel computer-aided screening system (DREAM) that analyzes fundus images with varying illumination and fields of view and generates a severity grade for non-proliferative diabetic retinopathy (NPDR) using machine learning. Initially, the blood vessel regions and the OD region are detected and masked as the fundus image background. Abnormal foreground regions corresponding to bright and red retinopathy lesions are then detected. A novel two-step hierarchical classification approach is proposed in which non-lesions (false positives) are rejected in the first step; in the second step, the bright lesions are classified as hard exudates or cotton wool spots, and the red lesions as hemorrhages or micro-aneurysms. Finally, the numbers of lesions detected per image are combined to generate a severity grade. The DREAM system achieves 100% sensitivity, 53.16% specificity, and 0.904 AUC on the publicly available MESSIDOR data set of 1,200 images.
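The two-step hierarchical classification used by DREAM (reject non-lesions first, then assign a lesion subtype) can be sketched as below. This is a structural sketch only: the thesis does not use this class or these classifiers, and the random-forest models, feature matrix, and label encoding are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class HierarchicalLesionClassifier:
    """Toy two-step hierarchy: step 1 rejects non-lesions (false
    positives); step 2 assigns a subtype to surviving candidates."""

    def __init__(self):
        self.rejector = RandomForestClassifier(n_estimators=50, random_state=0)
        self.subtyper = RandomForestClassifier(n_estimators=50, random_state=0)

    def fit(self, X, is_lesion, subtype):
        # Step-1 model sees everything; step-2 model trains on true lesions only.
        self.rejector.fit(X, is_lesion)
        self.subtyper.fit(X[is_lesion == 1], subtype[is_lesion == 1])
        return self

    def predict(self, X):
        keep = self.rejector.predict(X) == 1
        out = np.full(len(X), -1)          # -1 = rejected as non-lesion
        if keep.any():
            out[keep] = self.subtyper.predict(X[keep])
        return out
```

The benefit of the hierarchy is that the subtype classifier never has to model the (typically dominant) false-positive class, which tends to sharpen the subtype decision boundaries.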
Additionally, we propose algorithms that detect post-operative laser scars, fibrosed tissues, and neovascularization in fundus images. The proposed algorithm achieves 94.74% sensitivity and 92.11% specificity for separating normal images from images with proliferative diabetic retinopathy (PDR) in the STARE data set. Finally, we present a novel automated system that segments six sub-retinal thickness maps from optical coherence tomography (OCT) image stacks of healthy patients and patients with diabetic macular edema (DME). First, each image in the OCT stack is denoised using a Wiener deconvolution algorithm that estimates the speckle noise variance using a Fourier-domain structural error. Next, the denoised images are subjected to an iterative multi-resolution high-pass filtering algorithm that detects seven sub-retinal surfaces in six iterative steps. The thicknesses of each sub-retinal layer across all scans of a particular OCT stack are then combined to generate sub-retinal thickness maps. Using the proposed system, the average inner sub-retinal layer thickness in abnormal images is estimated as 275 um (r = 0.92) with an average error of 9.3 um, while the average thickness of the outer segments in abnormal images is estimated as 57.4 um (r = 0.74) with an average error of 3.5 um. Further analysis of the thickness maps from abnormal OCT image stacks demonstrates irregular plateau regions in the inner nuclear layer (INL) and outer nuclear layer (ONL), whose area can be estimated with r = 0.99 by the proposed segmentation system.

Item Enhancing the Performance of Mobile Video Streaming Ecosystems (2022-12) Shehata, Eman

Recent years have witnessed a rapid increase in video streaming services (e.g., Netflix, YouTube, and Amazon Video) to meet users' interests, driven by the massive content published by content providers, high-speed Internet, the wide use of social networks, and the growth in smart mobile devices.
Additionally, the deployment of commercial 5G in 2019 and its potential for ultra-high bandwidth have enabled a new era for bandwidth-intensive networked applications such as volumetric video streaming. This growth in available content and demand places a significant burden on the Internet infrastructure, compounded by the complex structure of videos: each video is encoded in multiple resolutions and at different bitrate quality levels to support diverse end-user devices and network conditions. Large-scale content providers have therefore resorted to employing one or more content distribution networks (CDNs) to cache video content and handle user requests, as well as to edge computing and machine learning, to improve the performance perceived by their end users. Poor performance impacts user engagement, which leads to significant revenue loss for content providers. In this thesis, we address crucial research problems in improving the performance of mobile video streaming ecosystems to meet scalability and user quality-of-experience (QoE) requirements.

First, we study the performance of intermediate caches in a hierarchical cache network. We show that when cache servers at different layers act independently, objects are often cached only to be evicted before their next request arrives, leading to cache under-utilization. To overcome this issue, we propose the "BIG" cache abstraction, which treats distributed cache pieces as if they were "glued" together to form one virtual "BIG" cache, allowing any existing caching strategy to be applied as a single consistent policy. Consequently, "BIG" cache improves object hit probability, thereby reducing origin server load and network bandwidth. Second, object access patterns change frequently: an object's popularity varies diurnally and over its life span.
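The "BIG" cache abstraction described above can be sketched minimally as follows: several servers' cache slices are presented as one virtual cache, so a single policy (here plain LRU, as an illustrative choice) governs admission and eviction across all of them. The class and its interface are assumptions for illustration, not the thesis's implementation.

```python
from collections import OrderedDict

class BigCache:
    """Toy "BIG" cache: slices contributed by several cache servers are
    glued into one virtual cache governed by a single global LRU policy."""

    def __init__(self, slice_sizes):
        self.capacity = sum(slice_sizes)   # total space across all slices
        self._lru = OrderedDict()          # one consistent global ordering

    def get(self, key):
        if key not in self._lru:
            return False                   # miss: the origin server must serve it
        self._lru.move_to_end(key)         # refresh recency across all slices
        return True

    def put(self, key):
        if key in self._lru:
            self._lru.move_to_end(key)
            return
        if len(self._lru) >= self.capacity:
            self._lru.popitem(last=False)  # evict the globally coldest object
        self._lru[key] = True
```

The contrast with independent per-layer caches is that eviction here always removes the globally least-recently-used object, so no slice evicts an object that the policy, viewed over the whole virtual cache, would have kept.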
Because of these frequent changes, caching algorithms cannot rely solely on locally observed object access patterns when making caching decisions, and manually tuning the caching algorithm of each cache server to track changing request patterns is expensive and unscalable. To address this issue, we developed a machine-learning LSTM encoder-decoder model for content popularity prediction. Our DEEPCACHE framework builds on this model as a self-adaptive caching system that makes end-to-end caching decisions based on predicted popularity, and we show that it increases the number of cache hits for existing caching policies. Third, routing is a central problem in ensuring the resiliency of CDNs. Purely distributed routing algorithms such as Bellman-Ford suffer from the "count-to-infinity" problem, whereas Dijkstra's algorithm requires global topology dissemination and route recomputation. Much of the recent literature on resilient routing tolerates only k link/node failures for a constant k (often with topological constraints on the graphs), and none of it works under arbitrary link failures. To address this, we developed a proactive routing algorithm that ensures connectivity between any pair of nodes under arbitrary failures without the global topology dissemination and route recomputation required by purely distributed routing algorithms. Our algorithm limits the number of nodes involved in the recovery process, the number of link reversals, and the convergence time. An additional advantage is the ability to utilize multiple paths between nodes, because directed edges between nodes remain usable even upon failures. Finally, given the deployment of commercial 5G in 2019 and its potential for ultra-high bandwidth, we studied the characteristics of 5G throughput and its impact on video streaming applications.
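The popularity-prediction interface behind the DEEPCACHE idea above (observe a window of requests, predict which objects will be popular next, and cache accordingly) can be sketched with a deliberately simple stand-in. The thesis uses an LSTM encoder-decoder; the exponentially weighted moving average below is an assumed substitute chosen only to keep the sketch self-contained.

```python
from collections import defaultdict

class PopularityPredictor:
    """Stand-in for an LSTM encoder-decoder popularity model: an
    exponentially weighted moving average of per-object request counts.
    Only the interface (observe a window, rank objects for the next
    window) mirrors the DEEPCACHE idea."""

    def __init__(self, alpha=0.5):
        self.alpha = alpha                 # weight on the newest window
        self.scores = defaultdict(float)   # smoothed popularity per object

    def observe(self, window):
        counts = defaultdict(int)
        for obj in window:
            counts[obj] += 1
        # Blend this window's counts into the running popularity estimate.
        for obj in set(self.scores) | set(counts):
            self.scores[obj] = (self.alpha * counts[obj]
                                + (1 - self.alpha) * self.scores[obj])

    def top_k(self, k):
        """Objects predicted most popular in the next window; a caching
        policy can prefetch or pin these."""
        return sorted(self.scores, key=self.scores.get, reverse=True)[:k]
```

A caching layer would call `observe` per request window and use `top_k` to decide which objects to keep or prefetch; swapping the moving average for a learned sequence model changes only the internals, not this interface.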
Our findings show that the wild fluctuations in 5G throughput, together with its dead zones, lead to large stall times while streaming videos. We redesigned video streaming applications to be 5G-aware, taking full advantage of the ultra-high bandwidth while overcoming its varying throughput. Our experiments show that our proposed strategies consistently deliver high video quality close to the theoretically optimal results while reducing (if not eliminating) stall time.
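One ingredient of a 5G-aware player, conservative bitrate selection under highly variable throughput, can be sketched as below. This is a generic illustration, not the thesis's strategy: the percentile estimator, the safety factor, and the bitrate ladder are all assumptions.

```python
def pick_bitrate(samples_mbps, ladder_mbps, safety=0.7):
    """Toy throughput-aware rate selection for volatile links such as 5G:
    estimate sustainable throughput from a low percentile of recent
    samples (so short dips and dead zones do not trigger stalls), then
    pick the highest bitrate rung within a safety-scaled budget."""
    s = sorted(samples_mbps)
    est = s[len(s) // 4]                 # ~25th-percentile throughput estimate
    budget = safety * est
    feasible = [b for b in sorted(ladder_mbps) if b <= budget]
    return feasible[-1] if feasible else min(ladder_mbps)
```

Using a low percentile rather than the mean is the key design choice here: with throughput swinging between ultra-high bandwidth and dead zones, the mean badly overestimates what the link can sustain through a dip.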