
AURA Lab

Augmenting software development with generative AI — from automated documentation to energy-efficient model training.

Mission

The AURA Lab (Augmented Understanding and Reasoning for AI) investigates how generative AI can transform software engineering practice. Our research spans the full lifecycle of AI-assisted development: building models that write, explain, and repair code; understanding the quality and trustworthiness of AI-generated output; and pioneering sustainable approaches to model training that reduce the environmental footprint of AI for SE research.

We believe that the next generation of software tools will be deeply collaborative — developers and AI working together to produce higher-quality software faster. Our work aims to make that collaboration reliable, transparent, and accessible to every developer.

By the Numbers

📜 Publications · 📚 Citations · 🤝 Members · 🏆 NSF CRII Award

Research Areas

Our research explores the intersection of generative AI and software engineering across four main thrusts:

🤖 Generative AI for Code

Training and evaluating large language models for code generation, completion, and transformation tasks. Studying how LLMs understand and produce source code across programming languages.

📄 Automated Documentation

Generating high-quality code summaries, commit messages, and API documentation using neural models. Bridging the gap between code and natural language for developer comprehension.
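To make the task concrete, here is a deliberately trivial, non-neural baseline for method-level comments: a template filled from a function's signature. It is an illustrative sketch only (not a lab tool), useful mainly as a contrast to the learned summarizers this thrust studies.

```python
import ast

def comment_stub(source: str) -> str:
    """Produce a naive method-level comment from a function's signature.

    A template baseline, in contrast to the neural summarizers studied
    in this thrust; it knows nothing about what the body actually does.
    """
    fn = ast.parse(source).body[0]          # first top-level definition
    params = ", ".join(a.arg for a in fn.args.args)
    return f"# {fn.name}: takes ({params}) and returns a value."

code = "def merge(left, right):\n    return left + right"
stub = comment_stub(code)
```

Even this stub gets the name and parameters right; the research question is how much further a model can go toward describing intent.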

🌱 Energy-Efficient Model Training

Reducing the computational and environmental cost of training AI models for SE. Developing benchmarks and methodologies for sustainable AI research in software engineering.
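As a flavor of the accounting this thrust involves, here is a minimal back-of-the-envelope sketch of a training run's energy and carbon footprint. The power draw, PUE, and grid carbon-intensity figures are placeholder assumptions for illustration, not lab measurements or benchmark values.

```python
# Illustrative estimate of training energy and CO2.
# All constants below are placeholder assumptions, not AURA Lab data.

def training_footprint(gpu_count: int, gpu_power_watts: float, hours: float,
                       pue: float = 1.5,
                       carbon_intensity_kg_per_kwh: float = 0.4):
    """Return (energy_kwh, co2_kg) for a training run.

    pue: data-center Power Usage Effectiveness (facility overhead multiplier).
    carbon_intensity_kg_per_kwh: grid emissions per kWh; varies widely by region.
    """
    energy_kwh = gpu_count * gpu_power_watts * hours * pue / 1000.0
    co2_kg = energy_kwh * carbon_intensity_kg_per_kwh
    return energy_kwh, co2_kg

# Example: 4 GPUs drawing 300 W each for 24 hours.
energy, co2 = training_footprint(4, 300, 24)
```

Real benchmarking replaces these assumed constants with measured power traces and region-specific grid data, which is precisely the gap a framework in this space aims to close.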

🔍 AI Trustworthiness & Hallucinations

Investigating when and why AI models hallucinate in coding tasks. Developing evaluation frameworks and mitigation strategies for reliable AI-assisted development.
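One very simple instance of such a check, shown here as an illustrative sketch rather than a lab tool, is flagging identifiers that generated Python code references but never defines or imports: a common symptom of hallucinated APIs.

```python
import ast
import builtins

def undefined_names(source: str) -> set:
    """Return names referenced in `source` that are never defined,
    imported, or built in: a crude proxy for hallucinated identifiers."""
    tree = ast.parse(source)
    defined = set(dir(builtins))
    used = set()
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            defined.add(node.name)
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                defined.update(a.arg for a in node.args.args)
        elif isinstance(node, ast.Import):
            defined.update(a.asname or a.name.split(".")[0] for a in node.names)
        elif isinstance(node, ast.ImportFrom):
            defined.update(a.asname or a.name for a in node.names)
        elif isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                defined.add(node.id)
            else:
                used.add(node.id)
    return used - defined

# `frobnicate` is referenced but never defined or imported.
generated = "import math\nresult = math.sqrt(frobnicate(4))"
suspects = undefined_names(generated)
```

A static check like this catches only one narrow failure mode; evaluating hallucinated-but-plausible library calls or subtly wrong semantics requires the richer evaluation frameworks this thrust develops.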

Key Publications

Selected papers from the AURA Lab that have shaped our research agenda:

On the Robustness of Code Generation Techniques: An Empirical Study on GitHub Copilot. MSR 2024 (Distinguished Paper).
Evaluating the Code Quality of AI-Assisted Code Generation Tools: An Empirical Study on GitHub Copilot, Amazon CodeWhisperer, and ChatGPT. ICPC 2024 (Distinguished Paper).
Using Transfer Learning for Code-Related Tasks. IEEE TSE 2023.
Studying the Usage of Text-to-Text Transfer Transformer to Support Code-Related Tasks. ICSE 2021.
An Empirical Study on Code Comment Completion. ICSE 2022.
On the Energy Footprint of Software Engineering for AI-Based Systems. IEEE TSE 2024.

Tools & Artifacts

Open-source tools and datasets produced by the AURA Lab:

CodeDoc (Tool)

An automated documentation generation tool that produces high-quality code summaries and method-level comments using fine-tuned transformer models.

GreenAI-SE (Benchmark)

An energy benchmarking framework for measuring the carbon footprint and computational cost of training AI models in software engineering research.

T5 Code Tasks Corpus (Dataset)

A large-scale curated dataset for training and evaluating text-to-text transfer transformer models on code summarization, generation, and bug fixing tasks.

Copilot Robustness Suite (Tool)

A testing framework for evaluating the robustness and reliability of AI code assistants under diverse prompt perturbations and edge cases.
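To give a flavor of what "prompt perturbation" means in this setting, here is an illustrative sketch (not the suite's actual API) that enumerates semantics-preserving variants of a natural-language prompt; a robust assistant should produce functionally equivalent code for every variant.

```python
import itertools

def perturb_prompt(prompt: str):
    """Yield semantics-preserving variants of a code-generation prompt.

    The two perturbation families here (politeness prefixes, trailing
    whitespace) are illustrative; a real suite would cover many more,
    such as paraphrasing or identifier renaming in code context.
    """
    prefixes = ["", "Please ", "Could you "]
    suffixes = ["", "\n", "  "]
    for pre, suf in itertools.product(prefixes, suffixes):
        yield pre + prompt + suf

variants = list(perturb_prompt("write a function that reverses a string"))
# 3 prefixes x 3 suffixes = 9 variants, including the unmodified prompt.
```

Robustness is then measured by running the assistant on each variant and comparing the generated programs, for example via a shared test suite.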

Team

Meet the researchers behind the AURA Lab:

Antonio Mastropaolo, Assistant Professor (PI)

Zahidul Haque Alvi, Ph.D. Student

Research Assistant, Ph.D. Student

Undergraduate Researcher, Research Assistant
