AURA Lab
Augmenting software development with generative AI — from automated documentation to energy-efficient model training.
Mission
We believe that the next generation of software tools will be deeply collaborative — developers and AI working together to produce higher-quality software faster. Our work aims to make that collaboration reliable, transparent, and accessible to every developer.
Research Areas
Our research explores the intersection of generative AI and software engineering across four main thrusts:
Generative AI for Code
Training and evaluating large language models for code generation, completion, and transformation tasks. Studying how LLMs understand and produce source code across programming languages.
Automated Documentation
Generating high-quality code summaries, commit messages, and API documentation using neural models. Bridging the gap between code and natural language for developer comprehension.
Energy-Efficient Model Training
Reducing the computational and environmental cost of training AI models for SE. Developing benchmarks and methodologies for sustainable AI research in software engineering.
AI Trustworthiness & Hallucinations
Investigating when and why AI models hallucinate in coding tasks. Developing evaluation frameworks and mitigation strategies for reliable AI-assisted development.
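One simple evaluation signal in this spirit is checking whether generated code calls functions that exist nowhere in the provided context, a common symptom of hallucinated APIs. The sketch below is an illustrative, hypothetical check (not the lab's actual framework), using Python's standard `ast` module:

```python
import ast
import builtins


def undefined_calls(context_code: str, generated_code: str) -> set[str]:
    """Return names of functions called in generated_code that are
    defined neither in context_code nor among Python builtins."""
    # Functions the surrounding context actually defines.
    defined = {
        node.name
        for node in ast.walk(ast.parse(context_code))
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    }
    known = defined | set(dir(builtins))
    # Simple-name calls appearing in the generated snippet.
    calls = {
        node.func.id
        for node in ast.walk(ast.parse(generated_code))
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
    }
    return calls - known


context = "def parse_config(path):\n    return {}\n"
generated = "cfg = parse_config('settings.toml')\nvalidate_schema(cfg)\n"
print(undefined_calls(context, generated))  # {'validate_schema'}
```

A real evaluation framework would also resolve imports, methods, and attributes, but even this shallow check surfaces many fabricated-API errors.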
Key Publications
Selected papers from the AURA Lab that have shaped our research agenda:
Tools & Artifacts
Open-source tools and datasets produced by the AURA Lab:
CodeDoc
An automated documentation generation tool that produces high-quality code summaries and method-level comments using fine-tuned transformer models.
GreenAI-SE
An energy benchmarking framework for measuring the carbon footprint and computational cost of training AI models in software engineering research.
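The core accounting behind such a framework is simple arithmetic over measured GPU time. The sketch below is not GreenAI-SE's actual API; the default power, PUE, and carbon-intensity figures are illustrative assumptions that vary widely across hardware and regions:

```python
def training_footprint(gpu_hours: float,
                       avg_power_watts: float = 300.0,
                       pue: float = 1.5,
                       grid_kgco2_per_kwh: float = 0.4) -> dict:
    """Estimate energy (kWh) and emissions (kg CO2) of a training run.

    Assumed defaults: average GPU board power of 300 W, a datacenter
    power usage effectiveness (PUE) of 1.5, and a grid carbon intensity
    of 0.4 kg CO2 per kWh.
    """
    energy_kwh = gpu_hours * avg_power_watts / 1000.0 * pue
    return {
        "energy_kwh": energy_kwh,
        "co2_kg": energy_kwh * grid_kgco2_per_kwh,
    }


# e.g. a 100 GPU-hour fine-tuning run under the default assumptions
print(training_footprint(100))
```

Measuring `gpu_hours` and average draw directly (rather than assuming them) is exactly where a benchmarking framework earns its keep.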
T5 Code Tasks Corpus
A large-scale curated dataset for training and evaluating text-to-text transfer transformer models on code summarization, generation, and bug fixing tasks.
Copilot Robustness Suite
A testing framework for evaluating the robustness and reliability of AI code assistants under diverse prompt perturbations and edge cases.
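Prompt perturbations of this kind are typically semantics-preserving edits: the code's meaning is unchanged, so a reliable assistant's completions should be stable across them. A minimal illustrative sketch (the perturbation set is an assumption, not the suite's actual one):

```python
import re


def perturb_prompt(prompt: str, rename=None) -> list[str]:
    """Produce semantics-preserving variants of a code prompt, of the
    kind a robustness suite could feed to an AI code assistant to check
    that its completions stay stable."""
    variants = [
        re.sub(r"#[^\n]*", "", prompt),       # strip comments
        re.sub(r"\n{2,}", "\n", prompt),      # collapse blank lines
        prompt.replace("    ", "\t"),         # indent with tabs instead
    ]
    # Consistently rename user identifiers (word-boundary matches only).
    for old, new in (rename or {}).items():
        variants.append(re.sub(rf"\b{re.escape(old)}\b", new, prompt))
    return variants


base = "def total(xs):  # sum a list\n    acc = 0\n"
for variant in perturb_prompt(base, rename={"acc": "running"}):
    print(repr(variant))
```

Running the same prompt and its variants through an assistant, then diffing the completions, gives a concrete stability score per perturbation class.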
Team
Meet the researchers behind the AURA Lab: