

Thesis Defence: A Comparative Study of Automatically Generated and Large Language Model-Generated Unit Tests for Piecewise Function Approximation Algorithms

December 15, 2025 at 10:00 am - 2:00 pm

Riya Manoj Kanabar, supervised by Dr. Yves Lucet, will defend their thesis titled “A Comparative Study of Automatically Generated and Large Language Model-Generated Unit Tests for Piecewise Function Approximation Algorithms” in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.

An abstract for Riya Manoj Kanabar’s thesis is included below.

Defences are open to all members of the campus community as well as the general public. Please email yves.lucet@ubc.ca to receive the Zoom link for this defence.


Abstract

Large language models have emerged as promising tools for automated test generation, yet their effectiveness compared to systematic testing approaches remains empirically unvalidated for mathematical algorithms. This thesis investigates whether LLMs can generate unit tests as effective as systematic enumeration for piecewise function approximation algorithms—a fundamental class of algorithms requiring both tolerance satisfaction and segment minimization.
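To make the problem concrete, here is a minimal sketch of one member of this algorithm class: a greedy piecewise-constant approximation under a uniform (L-infinity) tolerance. The function name and the specific greedy strategy are illustrative assumptions, not the algorithms evaluated in the thesis; for this simple setting, extending a segment while its value range stays within twice the tolerance is known to minimize the segment count.

```python
def piecewise_constant_greedy(values, eps):
    """Approximate a sequence of samples with the fewest constant segments
    such that every sample is within eps of its segment's constant.

    Returns a list of (start, end, constant) tuples, with end exclusive.
    Illustrative sketch only -- not the thesis's evaluated algorithms.
    """
    segments = []
    start = 0
    lo = hi = values[0]  # running min/max of the current segment
    for i in range(1, len(values)):
        v = values[i]
        new_lo, new_hi = min(lo, v), max(hi, v)
        if new_hi - new_lo > 2 * eps:
            # Adding v would force some sample farther than eps from the
            # midpoint, so close the current segment and start a new one.
            segments.append((start, i, (lo + hi) / 2))
            start, lo, hi = i, v, v
        else:
            lo, hi = new_lo, new_hi
    segments.append((start, len(values), (lo + hi) / 2))
    return segments
```

For example, `piecewise_constant_greedy([1.0, 1.2, 1.1, 3.0, 3.1], 0.2)` covers the data with two segments, using each segment's midpoint `(lo + hi) / 2` so the worst-case error never exceeds `eps`.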

We develop a comparative testing framework evaluating seven state-of-the-art language models against systematic exhaustive testing. The framework employs provably optimal algorithms as ground-truth oracles, enabling definitive identification of algorithmic failures across both tolerance violations and suboptimal solutions. Systematic generation produces 75 billion test cases through bounded parameter space enumeration, while LLM-based generation utilizes varied prompting strategies across multiple contemporary models. GPU acceleration and JIT compilation achieve computational feasibility, reducing ten-billion-scale evaluation from years to hours. We evaluate 14 candidate algorithms spanning diverse paradigms across piecewise constant and piecewise linear approximation problems.
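The oracle-based comparison described above can be sketched as a small differential-testing harness. Everything here is a hypothetical toy, assuming a simple greedy as the provably optimal oracle, a deliberately weakened candidate, and a tiny enumerated parameter grid standing in for the billions of systematically generated cases; it is not the thesis's framework, only an illustration of classifying failures into tolerance violations versus suboptimal (excess-segment) solutions.

```python
from itertools import product

def optimal_pwc(values, eps):
    """Oracle: greedy piecewise-constant fit that is segment-count optimal
    under an L-infinity tolerance (illustrative stand-in for the thesis's
    provably optimal algorithms)."""
    segments, start = [], 0
    lo = hi = values[0]
    for i in range(1, len(values)):
        v = values[i]
        new_lo, new_hi = min(lo, v), max(hi, v)
        if new_hi - new_lo > 2 * eps:
            segments.append((start, i, (lo + hi) / 2))
            start, lo, hi = i, v, v
        else:
            lo, hi = new_lo, new_hi
    segments.append((start, len(values), (lo + hi) / 2))
    return segments

def naive_pwc(values, eps):
    """Deliberately weak candidate: anchors each segment at its first value
    instead of the midpoint, so it only absorbs a range of eps (not 2*eps)
    and sometimes uses more segments than necessary."""
    segments, start = [], 0
    for i in range(1, len(values)):
        if abs(values[i] - values[start]) > eps:
            segments.append((start, i, values[start]))
            start = i
    segments.append((start, len(values), values[start]))
    return segments

def within_tolerance(values, segments, eps):
    """Check that every sample lies within eps of its segment's constant."""
    return all(abs(values[j] - c) <= eps + 1e-12
               for s, e, c in segments for j in range(s, e))

def systematic_compare(candidate, oracle, eps_grid, value_grid, n_points):
    """Enumerate a bounded parameter space and classify candidate failures
    against the oracle: tolerance violations vs. suboptimal segment counts."""
    failures = {"cases": 0, "tolerance": 0, "suboptimal": 0}
    for eps in eps_grid:
        for values in product(value_grid, repeat=n_points):
            values = list(values)
            failures["cases"] += 1
            cand = candidate(values, eps)
            if not within_tolerance(values, cand, eps):
                failures["tolerance"] += 1
            elif len(cand) > len(oracle(values, eps)):
                failures["suboptimal"] += 1
    return failures
```

Running `systematic_compare(naive_pwc, optimal_pwc, [0.2], [0.0, 0.4, 0.8], 3)` exhaustively checks all 27 three-point inputs over the grid: the naive candidate never violates the tolerance, but the harness catches it producing more segments than the oracle on some inputs, which is exactly the failure mode a tolerance-only test suite would miss.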

The comparative evaluation reveals substantial differences in failure detection effectiveness between systematic and LLM-based test generation. These findings establish empirical evidence regarding LLM capabilities and limitations for unit test generation in mathematical algorithm domains, informing practical decisions about appropriate deployment of AI-assisted testing methodologies.


Additional Info

Registration/RSVP Required: Yes (see event description)
Event Type: Thesis Defence
Topic: Research and Innovation, Science, Technology and Engineering
Audiences: Alumni, Community and public, Faculty, Staff, Family friendly, Partners and Industry, Students, Postdoctoral Fellows and Research Associates