Hack-Agents for Faster, Better Hackathons
A multi-agent system can support hackathon innovators differently across ideation, implementation, and evaluation.
By Lu Li, Savindu Herath, and Cyrille Grumbach
Autumn 2025

Key takeaway: Using the same AI support throughout a hackathon can push teams toward familiar ideas too quickly. This project shows that phase-specific assistance can preserve exploratory ideation, support efficient implementation, and improve evaluation.
Reference for the paper: Li, L., Herath, S., & Grumbach, C. (2026, April). Hack-Agents: A Multi-Agent System for Innovation-Proof of Concept and Implications for AI-Augmented Hackathons. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/full/10.1145/3772363.3798678
Hackathons move fast. Teams have to generate ideas, build something that works, and evaluate what they created—all under tight time pressure. That is exactly why many innovators now turn to AI for help. These tools can improve efficiency and lower barriers to participation, but they can also create problems. When the same AI support is used in the same way across the whole innovation process, teams may rely on it too much and converge too early on familiar ideas.
This project asks a practical question: can AI support be designed to fit the different phases of innovation more effectively? The answer explored here is yes. Instead of giving innovators one single chatbot for the entire process, the project developed Hack-Agents, a multi-agent system for innovation that changes its role across ideation, implementation, and evaluation.
The system uses scaffolding at two levels. At the macro level, it structures the process into three phases: ideation, implementation, and evaluation. This helps innovators avoid rushing straight into building before their problem understanding has stabilized, and it creates space to reflect on outcomes and future improvements. At the micro level, each phase has its own agent. The Ideation Agent promotes divergent exploration by asking probing questions and surfacing alternative framings. The Implementation Agent provides more continuous support to help innovators resolve technical uncertainties and execute a selected approach. The Evaluation Agent helps innovators assess outcomes and trade-offs while also reflecting on possible improvements. The system was implemented and deployed in several innovation-focused hackathons in Autumn 2025, involving hundreds of innovators.
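To make the two-level scaffolding concrete, the structure described above can be sketched in a few lines of code. This is an illustrative sketch only: the paper does not publish Hack-Agents' implementation, so the class, method names, and prompt texts below are assumptions, not the authors' actual system.

```python
from dataclasses import dataclass

# Macro-level scaffolding: a fixed, ordered sequence of phases.
PHASES = ("ideation", "implementation", "evaluation")

# Micro-level scaffolding: a hypothetical role prompt per phase-specific
# agent, paraphrasing the behaviors described in the article.
AGENT_PROMPTS = {
    "ideation": (
        "Promote divergent exploration: ask probing questions and "
        "surface alternative framings rather than proposing solutions."
    ),
    "implementation": (
        "Provide continuous support: help resolve technical "
        "uncertainties and execute the team's selected approach."
    ),
    "evaluation": (
        "Help assess outcomes and trade-offs, and prompt reflection "
        "on possible improvements."
    ),
}

@dataclass
class Session:
    """Tracks a team's current phase in the structured process."""
    phase: str = "ideation"

    def advance(self) -> None:
        """Move to the next phase; stays at evaluation once reached."""
        i = PHASES.index(self.phase)
        self.phase = PHASES[min(i + 1, len(PHASES) - 1)]

    def agent_prompt(self) -> str:
        """Select the phase-specific agent's role prompt."""
        return AGENT_PROMPTS[self.phase]
```

The point of the sketch is the routing logic: the same user request reaches a differently instructed agent depending on where the team is in the process, which is what distinguishes this design from a single all-purpose chatbot.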
The results are highly relevant for anyone designing AI-augmented hackathons. They show that aligning AI assistance with the distinct phases of innovation can reduce over-reliance and premature convergence. In practice, that means teams can keep ideation more exploratory, move through implementation more efficiently, and approach evaluation in a more deliberate way. For organizers and designers of innovation processes, the message is clear: AI support works better when it changes with the work innovators are doing.
© 2026 Build2Gether. All rights reserved.