
Thesis Defence: Parameter-Efficient Fine-Tuning for Software Engineering

March 27 at 1:00 pm - 5:00 pm

Amirreza Esmaeili, supervised by Dr. Fatemeh Fard, will defend their thesis titled “Parameter-Efficient Fine-Tuning for Software Engineering: A Systematic Study of Low-Resource and Multilingual Knowledge Transfer” in partial fulfillment of the requirements for the degree of Master of Science in Computer Science.

An abstract for Amirreza Esmaeili’s thesis is included below.

Defences are open to all members of the campus community as well as the general public. Registration is not required for in-person defences.

Abstract

Large Language Models (LLMs) have achieved strong performance on a wide range of software engineering tasks, yet adapting these models to new tasks and programming languages remains computationally expensive. Parameter-Efficient Fine-Tuning (PEFT) methods address this challenge by updating only a small subset of parameters or introducing lightweight adaptation modules. However, there is limited empirical understanding of how different PEFT methods behave across model families, adaptation settings, and programming language conditions, particularly for low-resource and unseen languages.
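As a concrete illustration of the "small subset of parameters" idea, a LoRA-style layer freezes the pretrained weight matrix and trains only a low-rank update alongside it. The following is a minimal, generic PyTorch sketch; it is not code from the thesis, and all names and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA-style layer: freeze W, train a low-rank update B @ A.

    Illustrative sketch only; names and defaults are not from the thesis.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # pretrained weights stay frozen
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank  # standard LoRA scaling

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # W x + scale * B A x : only A and B receive gradients
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(512, 512))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable}/{total} parameters")  # a small fraction of the total
```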

This thesis investigates the effectiveness of PEFT methods for software engineering through two complementary empirical studies. Study I examines PEFT in a monolingual adaptation setting, evaluating Compacter, LoRA, and IA3 on general-purpose and code-specific LLMs for code summarization and code generation, with a central focus on knowledge transfer to R, an unseen and low-resource programming language. The results show that LoRA consistently delivers the strongest overall performance, while Compacter achieves competitive accuracy with substantially lower computational and storage requirements.
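In practice, adapters like those compared in Study I are commonly attached through libraries such as Hugging Face's peft. Below is a hedged sketch of wrapping a code LLM with LoRA; the checkpoint and hyperparameters are assumptions for illustration, not the configuration used in the thesis:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Illustrative only: the checkpoint and hyperparameters below are assumptions,
# not the setup used in the thesis.
model = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

lora = LoraConfig(
    r=8,                                  # rank of the low-rank update
    lora_alpha=16,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections, a common choice
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights
```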

Study II investigates PEFT in a multilingual adaptation setting, focusing on cross-language knowledge transfer for code translation on modern decoder-only Code-LLMs, including CodeLlama, DeepSeek-Coder, and Qwen2.5-Coder. Fusion-based approaches are compared against LoRA and Compacter. The results indicate that LoRA remains a consistently strong baseline, while integrating Compacter within fusion architectures substantially improves performance in multilingual code translation.
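Fusion-based approaches, broadly in the spirit of AdapterFusion, learn attention weights that mix the outputs of several language-specific modules. A rough PyTorch sketch of that mixing step follows; all names and shapes are hypothetical, and this is not the thesis's architecture:

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Rough sketch: attention-weighted mix of per-language adapter outputs.

    Inspired by AdapterFusion-style mixing; not the thesis's architecture.
    """

    def __init__(self, hidden: int):
        super().__init__()
        self.query = nn.Linear(hidden, hidden)  # query from the layer input
        self.key = nn.Linear(hidden, hidden)    # keys from adapter outputs

    def forward(self, h: torch.Tensor, adapter_outs: list) -> torch.Tensor:
        # h: (batch, hidden); adapter_outs: one (batch, hidden) tensor per adapter
        stacked = torch.stack(adapter_outs, dim=1)        # (batch, n, hidden)
        q = self.query(h).unsqueeze(1)                    # (batch, 1, hidden)
        k = self.key(stacked)                             # (batch, n, hidden)
        weights = torch.softmax((q * k).sum(-1), dim=-1)  # (batch, n)
        return (weights.unsqueeze(-1) * stacked).sum(1)   # (batch, hidden)

# Example: fuse three per-language adapter outputs for one hidden state.
fusion = FusionLayer(hidden=64)
h = torch.randn(2, 64)
outs = [torch.randn(2, 64) for _ in range(3)]
print(fusion(h, outs).shape)  # torch.Size([2, 64])
```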

Overall, this thesis demonstrates that PEFT methods are effective mechanisms for facilitating knowledge transfer across monolingual and multilingual adaptation settings. Effective PEFT selection depends on the interaction between task characteristics, model capability, programming language properties, and resource constraints. These findings provide practical guidance for efficient and context-aware model adaptation in software engineering.

Details

Date: March 27
Time: 1:00 pm - 5:00 pm

Additional Info

Room Number: ASC 301
Registration/RSVP Required: No
Event Type: Thesis Defence
Topic: Research and Innovation; Science, Technology and Engineering
Audiences: Alumni, Community and public, Faculty, Staff, Family friendly, Partners and Industry, Students, Postdoctoral Fellows and Research Associates