Our Research and Impact
Tackling real-world challenges to advance research while driving social impact and innovation

To Be Announced: Our Project in Vietnam
By Chan Park and Xiaomeng Gu
An upcoming project in Vietnam on innovation for individuals with disabilities.

To Be Announced: Our Project in Thailand
By Chan Park and Vivek Chaudhary
An upcoming project in Thailand on innovation for people with disabilities.

To Be Announced: Upcoming AI Hackathon in India
By Kim Claes, Cyrille Grumbach, and Savindu Herath
An upcoming hackathon in India on the UN Sustainable Development Goals with AI integration.

To Be Announced: Moonshots or Safe Bets 2.0
By Cyrille Grumbach, Jackie Lane, and Georg von Krogh
Let's repeat and scale up our previous experiment: https://www.build2gether.foundation/projects/moonshots-or-safe-bets-how-ai-recommendations-shape-innovation-evaluation

Coming Soon: Insights from our Build2Gether AI Hackathon
By Cyrille Grumbach and Chan Park

Hack-Agents for Faster, Better Hackathons
By Lu Li, Savindu Herath, and Cyrille Grumbach
Key takeaway: Using the same AI support throughout a hackathon can push teams toward familiar ideas too quickly. This project shows that phase-specific assistance can preserve exploratory ideation, support efficient implementation, and improve evaluation.

Reference: Li, L., Herath, S., & Grumbach, C. (2026, April). Hack-Agents: A Multi-Agent System for Innovation-Proof of Concept and Implications for AI-Augmented Hackathons. In Proceedings of the 2026 CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/full/10.1145/3772363.3798678

Hackathons move fast. Teams have to generate ideas, build something that works, and evaluate what they created, all under tight time pressure. That is exactly why many innovators now turn to AI for help. These tools can improve efficiency and lower barriers to participation, but they can also create problems. When the same AI support is used in the same way across the whole innovation process, teams may rely on it too much and converge too early on familiar ideas.

This project asks a practical question: can AI support be designed to fit the different phases of innovation more effectively? The answer explored here is yes. Instead of giving innovators one single chatbot for the entire process, the project developed Hack-Agents, a multi-agent system for innovation that changes its role across ideation, implementation, and evaluation.

The system uses scaffolding at two levels. At the macro level, it structures the process into three phases: ideation, implementation, and evaluation. This helps innovators avoid rushing straight into building before stabilizing their problem understanding, and it creates space to reflect on outcomes and future improvements. At the micro level, each phase has its own agent. The Ideation Agent promotes divergent exploration by asking probing questions and surfacing alternative framings. The Implementation Agent provides more continuous support to help innovators resolve technical uncertainties and execute a selected approach. The Evaluation Agent helps innovators assess outcomes and trade-offs while also reflecting on possible improvements.

The system was implemented and deployed in several innovation-focused hackathons in autumn 2025, involving hundreds of innovators. The results are highly relevant for anyone designing AI-augmented hackathons. They show that aligning AI assistance with the distinct phases of innovation can reduce over-reliance and premature convergence. In practice, that means teams can keep ideation more exploratory, move through implementation more efficiently, and approach evaluation in a more deliberate way. For organizers and designers of innovation processes, the message is clear: AI support works better when it changes with the work innovators are doing.
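To make the two-level scaffolding concrete, here is a minimal Python sketch of phase-specific agents. Only the three phase names and the agents' roles come from the project description above; the prompts, the PhaseAgent class, and the call_llm stub are hypothetical illustrations, not the authors' implementation.

    # Minimal sketch of phase-specific agents, in the spirit of Hack-Agents.
    # The phases and agent roles follow the project description; the prompts,
    # the PhaseAgent class, and the call_llm stub are hypothetical.
    from dataclasses import dataclass

    PHASE_PROMPTS = {
        "ideation": (
            "Support divergent exploration: ask probing questions and surface "
            "alternative problem framings instead of recommending one idea."
        ),
        "implementation": (
            "Provide continuous hands-on support: help resolve technical "
            "uncertainties and execute the approach the team selected."
        ),
        "evaluation": (
            "Help the team assess outcomes and trade-offs and reflect on "
            "possible improvements; do not propose new features."
        ),
    }

    def call_llm(system: str, user: str) -> str:
        # Stub so the sketch runs without network access; a real deployment
        # would call a chat model here.
        return f"[{system.split(':')[0]}] response to: {user}"

    @dataclass
    class PhaseAgent:
        phase: str  # "ideation", "implementation", or "evaluation"

        def respond(self, team_message: str) -> str:
            return call_llm(system=PHASE_PROMPTS[self.phase], user=team_message)

    # Macro-level scaffolding: hand teams a different agent in each phase.
    for phase in ("ideation", "implementation", "evaluation"):
        agent = PhaseAgent(phase)
        print(agent.respond("We want a navigation aid for wheelchair users."))

The point of the design, as described above, is that the switch between agents is driven by the phase of the work, not left to the team, which is what keeps ideation divergent and evaluation deliberate.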

Moonshots or Safe Bets? How AI Recommendations Shape Innovation Evaluation
By Cyrille Grumbach, Jackie Lane, and Georg von Krogh
Key takeaway: Sequence matters when feasibility and novelty pull in different directions. If you want reliable incremental innovations, use feasibility-focused AI recommendations first and novelty-focused AI recommendations second; if you want moonshots, reverse the order.

This project looks at a practical question that many organizations now face: when people evaluate many possible solutions, should they look at feasibility first or novelty first? And how can AI most effectively assist evaluators when multiple criteria must be considered? That choice may sound small, but it can shape which ideas survive and which ones get filtered out.

To study this, we partnered with Hackster.io, a leading crowdsourcing platform. We first launched an innovation contest that generated 132 open-source solutions. Then we ran a series of preregistered field experiments with thousands of evaluators from Sairam Institutions (India) using a two-stage evaluation process. Evaluators were randomly assigned to one of two sequences: feasibility-then-novelty or novelty-then-feasibility. In both sequences, AI recommendations guided evaluators during the evaluation process.

The main finding is clear and highly relevant for practice. When evaluators received feasibility-focused AI recommendations first, they were more likely to keep solutions that worked within existing constraints and then identify the more novel options within that set. This makes the feasibility-then-novelty sequence a strong choice when the goal is reliable incremental innovation: solutions that are highly feasible but less novel.

The reverse sequence produced a different outcome. When evaluators received novelty-focused AI recommendations first, they were more likely to keep unusual and atypical solutions early on, before feasibility was considered later. That sequence was better at surfacing more extreme possibilities. In practice, this makes novelty-then-feasibility the better option when the goal is moonshots: solutions that are highly novel but not feasible given existing constraints.

The broader lesson is that organizations should not treat the order of AI recommendations as a minor design detail. When feasibility and novelty are in tension, the sequence becomes a real strategic lever. Teams looking for dependable, near-term outcomes should begin with feasibility-focused AI recommendations. Teams looking for bold bets and future breakthroughs should begin with novelty-focused AI recommendations.

Check out how our work was covered by Harvard Business School:
https://d3.harvard.edu/how-ai-can-spot-your-next-billion-dollar-idea/
https://www.library.hbs.edu/working-knowledge/ai-trends-for-2026-building-change-fitness-and-balancing-trade-offs

Thanks to all the participants of our experiments from Sairam Institutions.
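To build intuition for why the sequence matters mechanically, here is a toy Python simulation of a two-stage filter over solutions whose novelty and feasibility pull against each other. Everything in it, the scoring model, the correlation, and the cutoffs, is invented for illustration; it is not data or code from the field experiments.

    # Illustrative toy model only: shows why a two-stage filter is
    # order-sensitive when feasibility and novelty are in tension.
    # Scores, thresholds, and correlation are invented for this sketch.
    import random

    random.seed(0)

    # Toy solution pool in which novelty and feasibility trade off.
    solutions = []
    for i in range(100):
        novelty = random.random()
        feasibility = max(0.0, min(1.0, 1.0 - novelty + random.gauss(0, 0.2)))
        solutions.append({"id": i, "novelty": novelty, "feasibility": feasibility})

    def two_stage(pool, first, second, shortlist=20, winners=5):
        # Stage 1 screens on the first criterion; stage 2 reorders survivors.
        kept = sorted(pool, key=lambda s: s[first], reverse=True)[:shortlist]
        return sorted(kept, key=lambda s: s[second], reverse=True)[:winners]

    for first, second in [("feasibility", "novelty"), ("novelty", "feasibility")]:
        picks = two_stage(solutions, first, second)
        f_avg = sum(s["feasibility"] for s in picks) / len(picks)
        n_avg = sum(s["novelty"] for s in picks) / len(picks)
        print(f"{first}-then-{second}: feasibility={f_avg:.2f}, novelty={n_avg:.2f}")

Because the first stage discards most of the pool, the criterion it screens on determines which solutions can survive at all; the second stage can only reorder what is left. That is one intuition for why the order of AI recommendations can act as a strategic lever rather than a minor design detail.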

How Problem Feedback Drives Innovation through Search Discontinuity
By Cyrille Grumbach, Chan Park, and Georg von Krogh
Key takeaway: When the broadcast problem is underspecified, the biggest breakthrough may come from challenging how solvers define the problem, not just how they improve the solution. Problem feedback pushed solvers toward new problem formulations and more innovative outcomes.

This project looks at a core challenge in external search: what happens when seekers broadcast a problem that cannot be clearly specified from the start? In many real settings, the problem is not neatly defined in advance. It appears through partial and ambiguous cues, which makes it hard to know what the “right” problem formulation is. That matters because solvers often build their search around the first formulation they come up with, even when that first formulation is too obvious.

To study this, the project examined a six-month global crowdsourcing contest focused on the underspecified problem of disability. Solvers were asked to create innovative solutions that are useful and novel for people with disabilities. In the first half of the contest, each solver submitted a problem formulation and a corresponding conceptual solution. Then they were randomly assigned to one of two conditions. In the treatment condition, they received problem feedback on their problem formulation. In the control condition, they received solution feedback on their conceptual solution. In both conditions, the feedback came from individuals with disabilities before solvers moved on to implementation.

The difference mattered. In the second half of the contest, solvers used the feedback to build tangible prototypes. Four seekers then evaluated each implemented solution based on novelty and usefulness. Across 114 implemented solutions, solvers who received problem feedback developed more innovative solutions than those who received solution feedback.

The project shows why. Most solvers initially converged on obvious problem formulations. Solution feedback usually kept them working within those same formulations. They explored, but they stayed in the same solution spaces. This led to what the project calls continuous search: search that keeps moving forward from the initial problem formulation. As a result, many solutions remained close to existing market solutions.

Problem feedback worked differently. It challenged solvers’ initial problem formulations and pushed them toward alternative formulations in previously unexplored parts of the problem space. That shift redirected their search toward different solution spaces. The project calls this discontinuous search. Instead of refining the same starting point, solvers changed the locus of search and moved toward solutions they would likely not have explored otherwise.

For practice, the message is clear. When seekers face an underspecified problem, feedback on the solution may not be enough. If the goal is innovation, it may be more powerful to respond to how solvers formulate the problem in the first place. This project shows that in external search, better innovation can come from changing the problem formulation before improving the solution.

All solutions developed through the contest are open source and have been downloaded more than 50,000 times worldwide, creating significant social impact for individuals with disabilities! More about the contest can be found on Hackster.io: https://www.hackster.io/contests/buildtogether2 Our contest has also been featured on ABC News and Forbes!

Examples of solutions developed in our innovation contest.

When Sharing Experiences Fails to Drive Innovation
By Cyrille Grumbach, Chan Park, and Georg von Krogh
Key takeaway: Tacit knowledge is not always better for innovation. In this project, innovators produced more innovative solutions when users shared explicit, codified information than when they shared tacit knowledge through personal experiences, although experience sharing helped users feel more socially integrated.

This project examines a simple yet important question: when users face problems that are still poorly understood, what is the best way to help innovators solve them? The study focused on a global crowdsourcing contest on innovation for individuals with disabilities. In this setting, the users had firsthand knowledge of their problems, while the able-bodied innovators did not.

The comparison was direct. In one condition, users shared explicit, codified information about their problems. In the other, they shared tacit knowledge through their personal experiences so innovators could learn from them. The expectation was that personal experiences would transfer more of the users’ tacit knowledge and lead to stronger innovation.

But that is not what happened. Solutions developed when users shared explicit, codified information were rated by other users facing similar problems as more innovative than solutions developed when users shared personal experiences.

Why? The study suggests that experience sharing may not work well when the knowledge is somatic, grounded in embodied experience, such as how users physically feel their problems. Rich personal experiences gave innovators a lot of information, but often too much. Many struggled to identify the central problem and got pulled toward secondary details. By contrast, innovators who received explicit, codified information were better able to focus on relevant problems and develop stronger solutions.

For practice, this matters for anyone organizing open innovation around social issues. If the goal is stronger innovation, simply asking users to share personal experiences may not be enough. If the goal is social integration, however, experience sharing still has value, because users feel more socially integrated in that condition. The project shows that organizers may need to design their processes differently depending on whether they want to maximize innovation or strengthen social integration between users and innovators.

All solutions developed through the contest are open-source and have been downloaded more than 250,000 times around the world, creating significant social impact for individuals with disabilities! More about the contest can be found on Hackster.io: https://www.hackster.io/contests/buildtogether Our contest has also been featured on ABC News, Forbes, and Make: magazine!

Examples of solutions developed in our innovation contest.
© 2026 Build2Gether. All rights reserved.