Projects

Opportunistic Assessment of Bone Density in the Cervical Spine Using Dental Cone Beam Computed Tomography

For the past three years, I’ve conducted research under Dr. Chamith Rajapakse at the University of Pennsylvania, focusing on the use of dental Cone Beam Computed Tomography (CBCT) for opportunistic assessment of bone density. Our project explores whether routine dental CBCT scans—which are widely accessible, lower in cost, and expose patients to less radiation than traditional CT—can serve as a reliable screening tool for early indicators of osteoporosis. Specifically, we evaluate radiographic density patterns in the cervical spine (C3 vertebra), an anatomical site known to correlate strongly with systemic bone quality. To ensure a comprehensive analysis, our study extended beyond the cervical spine and also included tooth density and femur CT scans, allowing us to compare structural and radiodensity characteristics across multiple anatomical regions. This additional dataset gave us broader insight into how bone quality manifests across different skeletal structures and helped validate whether CBCT-derived metrics behave consistently across varied biological environments.

Using a dataset of over 1,000 patient scans, we extracted voxel intensity features, calibrated density values across imaging modalities, and conducted large-scale statistical comparisons against DXA-based T-scores, the current clinical gold standard for osteoporosis diagnosis. Preliminary results show promising correlations between CBCT intensity patterns and DXA measurements, suggesting that CBCT could serve as an accessible, opportunistic screening method—especially for patients who regularly undergo dental imaging but may not receive dedicated bone density evaluations.

This research has broader clinical implications: if dental CBCT scans can reliably flag low bone density, dentists and oral surgeons could play a critical role in identifying at-risk individuals earlier, enabling timely medical follow-up. In populations where osteoporosis often goes undiagnosed—such as underserved communities or individuals without regular primary care—this approach could dramatically improve early detection rates while adding no additional imaging burden. Our ongoing work focuses on refining segmentation methods, normalizing scan data for cross-machine compatibility, and developing machine-learning models to automate risk classification with greater precision.
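The core comparison behind this kind of study (calibrated regional intensity vs. DXA T-score) can be sketched in a few lines of Python. Everything below is illustrative: the calibration slope and intercept, the synthetic patient values, and the function names are placeholders, not the study's actual pipeline or data.

```python
import numpy as np
from scipy.stats import pearsonr

def calibrate_density(raw_intensities, slope, intercept):
    """Linearly map raw CBCT voxel intensities to calibrated density values.

    In practice the slope/intercept would come from a calibration phantom;
    here they are arbitrary placeholders.
    """
    return slope * np.asarray(raw_intensities, dtype=float) + intercept

def mean_region_density(voxels, mask):
    """Average calibrated intensity inside a region-of-interest mask (e.g. C3)."""
    return voxels[mask].mean()

# Synthetic example: five "patients" with fake CBCT-derived densities that
# loosely track their DXA T-scores, plus noise.
rng = np.random.default_rng(0)
t_scores = np.array([-2.8, -1.9, -1.1, -0.4, 0.6])        # DXA gold standard
densities = 1200 + 150 * t_scores + rng.normal(0, 10, 5)  # invented values

r, p = pearsonr(densities, t_scores)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```

The statistical comparison itself is just a correlation between the two modality-specific measurements; the hard work in the real project is the segmentation and cross-machine calibration that produce trustworthy density values.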

Download My Paper (DOCX)
Teeth Segmentation Example
Project preview

Early Fault Detection in Endodontic Instruments Using Signal Processing and Machine Learning

For the past year, I've conducted research under Dr. Chandrasekhar Nataraj at Villanova University on early fault detection in endodontic instruments using signal processing and machine learning. I developed a real-time system that identifies early signs of stress and microfractures in dental root-canal files. Using accelerometer and dynamometer sensors, I captured high-frequency vibration and force data from endodontic instruments and transformed the signals using Fourier and Wavelet analysis to uncover meaningful patterns in dominant frequency, energy distribution, and RMS behavior. I then extracted statistical features and trained machine learning models in MATLAB to distinguish between healthy and degrading tools with strong accuracy. My findings showed that irregular frequency magnitudes and unstable wavelet energy levels are reliable predictors of tool fatigue—often appearing well before a file visibly fails. This work demonstrates how engineering and AI can make dentistry safer by providing clinicians with early, data-driven warnings, with future potential for integration into smart dental handpieces and next-generation endodontic devices.
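The project's feature extraction was done in MATLAB; a minimal NumPy sketch of the same idea (RMS and dominant-frequency features from a vibration signal, with synthetic tones standing in for real accelerometer data) might look like:

```python
import numpy as np

def vibration_features(signal, fs):
    """Extract two simple features: RMS amplitude and dominant frequency."""
    rms = np.sqrt(np.mean(signal**2))
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
    return rms, dominant

fs = 10_000                      # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)

# "Healthy" file: one clean 200 Hz component. "Degrading" file: an extra
# high-frequency component raises the overall RMS energy.
healthy = 0.5 * np.sin(2 * np.pi * 200 * t)
degraded = healthy + 0.4 * np.sin(2 * np.pi * 750 * t)

rms_h, f_h = vibration_features(healthy, fs)
rms_d, f_d = vibration_features(degraded, fs)
print(f"healthy:  RMS={rms_h:.3f}, dominant={f_h:.0f} Hz")
print(f"degraded: RMS={rms_d:.3f}, dominant={f_d:.0f} Hz")
```

Features like these, computed over sliding windows, are what a classifier would consume to flag a degrading tool before visible failure.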

Elephancy

Elephancy is an immersive VR visualization tool designed to support individuals with Aphantasia—people who are unable to form mental images. The project transforms a simple text prompt into a fully rendered, multi-perspective visual scene by combining several layers of AI technology. First, an LLM expands the prompt into a detailed, structured scene description; next, Stable Diffusion generates a series of high-resolution images from different angles; and finally, these views are stitched into an interactive VR environment using Flutter, A-Frame, and Android native tooling. Within the headset, users can explore objects and scenes from multiple viewpoints, allowing them to experience visual concepts they cannot internally imagine. This creates an entirely new form of accessibility technology for Aphantasia, supporting memory, creativity, learning, and emotional visualization through immersive external imagery.

The system was developed and showcased at PennApps 2024, one of the oldest and most prestigious collegiate hackathons, held at the University of Pennsylvania. Over the course of the event, the project was refined into a fully functional VR pipeline capable of generating, rendering, and displaying AI-driven scenes within minutes. Competing alongside hundreds of teams, Elephancy highlighted the potential of combining VR with generative AI to create new assistive technologies. By bridging cognitive psychology, computer vision, and VR engineering, Elephancy demonstrates how emerging AI tools can be used not just for entertainment or art—but to meaningfully expand human experience for those with neurological differences like Aphantasia.
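The three-stage pipeline can be outlined with illustrative stubs. None of the function bodies below are the real implementation, which calls an LLM API, Stable Diffusion, and a Flutter/A-Frame front end; this sketch only shows how the stages hand data to each other.

```python
from dataclasses import dataclass

@dataclass
class SceneSpec:
    """Structured scene description produced by the prompt-expansion stage."""
    description: str
    camera_angles: list

def expand_prompt(prompt: str) -> SceneSpec:
    """Stage 1 (stub): an LLM would expand the prompt into a detailed scene."""
    return SceneSpec(description=f"Detailed scene for: {prompt}",
                     camera_angles=[0, 90, 180, 270])

def render_views(spec: SceneSpec) -> list:
    """Stage 2 (stub): Stable Diffusion would render one image per angle."""
    return [f"view_{angle}.png" for angle in spec.camera_angles]

def build_vr_scene(views: list) -> str:
    """Stage 3 (stub): views are stitched into an A-Frame VR environment."""
    return f"scene.html ({len(views)} views)"

spec = expand_prompt("a lighthouse at sunset")
views = render_views(spec)
print(build_vr_scene(views))
```

Keeping the stages decoupled like this is what let the pipeline be assembled and iterated on quickly within a hackathon weekend.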

Elephancy Example
Project preview

3D Anatomical Model from CT Scans

3D Anatomical Model from CT Scans is an advanced medical imaging project focused on converting standard clinical CT scans into fully segmented, high-fidelity 3D anatomical models for interactive visualization. Using 3D Slicer as the core imaging platform and integrating the deep-learning model TotalSegmentator, the pipeline automatically identifies and isolates dozens of anatomical structures—such as organs, bones, vasculature, and soft tissues—that typically require hours of manual delineation by radiologists or research technicians. The workflow begins by preprocessing raw CT data, normalizing voxel intensities, and feeding the scan into TotalSegmentator’s neural network to generate detailed segmentation maps. These maps are then refined, converted into 3D surface meshes, and optimized to preserve anatomical accuracy while remaining lightweight enough for real-time rendering. Once processed, the models are exported into AR/VR-compatible formats, enabling immersive exploration of patient-specific anatomy using augmented and virtual reality devices.

This project significantly accelerates medical visualization workflows, transforming what used to be a time-consuming and technical task into an accessible, automated process. The resulting 3D models support a wide range of applications—including surgical planning, medical education, biomechanical research, and interactive patient communication. By bridging deep learning, biomedical imaging, and AR/VR technologies, this project demonstrates how computational tools can make complex anatomy more understandable, interactive, and useful across clinical and research environments.
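Two of the early steps, intensity normalization and per-structure measurement from a segmentation label map, can be sketched with NumPy. The Hounsfield window, the tiny synthetic label map, and the voxel spacing below are toy placeholders rather than the project's real data or pipeline.

```python
import numpy as np

def normalize_ct(hu_volume, window=(-1000, 1000)):
    """Clip a CT volume to a Hounsfield-unit window and rescale to [0, 1]."""
    lo, hi = window
    clipped = np.clip(hu_volume, lo, hi)
    return (clipped - lo) / (hi - lo)

def label_volumes_ml(label_map, voxel_spacing_mm):
    """Per-label volume in millilitres from a segmentation label map.

    Label 0 is treated as background and skipped.
    """
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0  # mm^3 -> mL
    labels, counts = np.unique(label_map, return_counts=True)
    return {int(l): c * voxel_ml for l, c in zip(labels, counts) if l != 0}

# Tiny synthetic volume with two fake structures: label 1 and label 2.
ct = np.random.default_rng(1).uniform(-1000, 1000, size=(4, 4, 4))
seg = np.zeros((4, 4, 4), dtype=int)
seg[:2] = 1
seg[2:] = 2

norm = normalize_ct(ct)
vols = label_volumes_ml(seg, voxel_spacing_mm=(1.0, 1.0, 1.0))
print(vols)
```

In the actual pipeline the label map comes from TotalSegmentator and the downstream steps (mesh conversion, optimization, AR/VR export) happen inside 3D Slicer, which this sketch does not attempt to reproduce.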

High-Degree Polynomial Root Finder

High-Degree Polynomial Root Finder is a Java-based numerical analysis project designed to compute roots of high-degree polynomials that conventional calculators and basic algebraic methods cannot handle. Using the Newton–Raphson iterative method, the program dynamically evaluates polynomial derivatives, refines successive estimates, and converges rapidly toward accurate real roots. Users input custom polynomial coefficients, and the solver automatically constructs the polynomial function, applies iterative root-finding, and reports solutions with high numerical precision. To verify correctness, the results were cross-checked against manually computed approximations and graph-based analyses across multiple polynomial degrees—from low-order examples to complex, high-degree equations. The solver demonstrated consistent accuracy and efficiency, even as complexity increased, highlighting the power of numerical methods in computational mathematics. This project not only strengthens understanding of calculus-based root-finding algorithms, but also showcases how algorithmic thinking can fill gaps where standard calculators and symbolic solvers fall short. The work earned 1st place at the Delaware Valley Science Fair, the Air Products Young Innovators Award, and 1st place in Programming at the 2023 Regional Media and Design Competition.
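The solver itself is written in Java; a compact Python sketch of the same Newton-Raphson scheme, using Horner's rule for evaluation and an analytically derived coefficient list for the derivative, illustrates the method:

```python
def horner(coeffs, x):
    """Evaluate a polynomial with coefficients [a_n, ..., a_1, a_0] at x."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

def derivative(coeffs):
    """Coefficient list of the derivative polynomial."""
    n = len(coeffs) - 1
    return [c * (n - i) for i, c in enumerate(coeffs[:-1])]

def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Refine x0 with Newton-Raphson until |p(x)| < tol."""
    dcoeffs = derivative(coeffs)
    x = x0
    for _ in range(max_iter):
        fx = horner(coeffs, x)
        if abs(fx) < tol:
            return x
        dfx = horner(dcoeffs, x)
        if dfx == 0:
            raise ZeroDivisionError("flat tangent; choose a different start")
        x -= fx / dfx
    return x

# x^3 - 2x - 5 = 0 is the classic Newton example; its real root is ~2.09455.
root = newton_root([1, 0, -2, -5], x0=2.0)
print(f"{root:.6f}")
```

Each iteration replaces the current estimate with the x-intercept of the tangent line, which is why convergence near a simple root is quadratic: the number of correct digits roughly doubles per step.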

Projectile Example