Open Source vs. Closed Source: The Security Debate in the Age of AI
By Sam Saffron

## AI Summary
Cal.com has decided to close its codebase, citing AI's ability to find and exploit open-source vulnerabilities as a key reason. Their argument is that transparency has become a liability now that AI can scan and exploit code at unprecedented speed. I understand the concern, but I firmly believe that closing source code is not the solution. At Discourse, we remain committed to open source, a stance we have held since our inception more than a decade ago. Our code has always been open under the GPLv2 license, and we see no reason to change that.
## The Closed-Source Argument
Cal.com contends that keeping code open allows AI to exploit vulnerabilities faster than they can be patched. Indeed, AI has accelerated the discovery of security issues, as evidenced by our own use of AI tools like GPT-5.3 Codex and Claude Opus 4.6 to identify numerous security issues in Discourse. However, I argue that closing the codebase is a misguided approach. AI can find vulnerabilities without source code, working against compiled binaries and APIs. Closing the code reduces the number of defenders who can inspect and improve the system.
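To make the black-box point concrete, here is a minimal, hypothetical sketch in Ruby. Everything in it is invented for illustration (the toy `lookup_user` endpoint, the payload list, the `probe` helper — none of this is Discourse or Cal.com code): a probe flags an injection flaw purely from the inputs it sends and the output it observes, with no access to the source at all.

```ruby
# Toy "closed" implementation: the prober never reads this code.
# Its return value stands in for the observable response of a deployed API.
def lookup_user(name)
  "SELECT * FROM users WHERE name = '#{name}'" # injectable on purpose
end

# Canary payloads a black-box scanner might try.
PAYLOADS = ["alice", "a' OR '1'='1"].freeze

# Black-box probe: send each payload, inspect only the observable output.
def probe(target)
  PAYLOADS.filter_map do |payload|
    output = target.call(payload)
    # A quote that survives unescaped into the output signals injection.
    payload if payload.include?("'") && output.include?(payload)
  end
end

findings = probe(method(:lookup_user))
puts "injection payloads accepted: #{findings.inspect}"
```

The probe treats the target as opaque, which is exactly why hiding the source does not stop this class of discovery; it only changes who gets to run the probe first.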
## The Power of Open Source
Open-source software like Linux thrives under constant scrutiny, attacked and defended by a global community. Transparency doesn't eliminate risk but enhances the defensive response. AI tools can now identify vulnerabilities in hours, but open-source code allows a broader range of defenders to use these tools. Our open-source approach has enabled us to fix security issues swiftly, leveraging contributions from a wide community of developers.
## Business vs. Security Decisions
Cal.com's decision seems driven more by business pressures than security concerns. Open-source code can be a competitive disadvantage, as competitors can easily access your architecture. Governance challenges also arise, with open-source communities often pushing back against decisions. These are valid business considerations, but framing them as security issues undermines the open-source ecosystem.
## AI and Security Practices
At Discourse, we employ the latest AI vulnerability scanners to identify and patch security issues before they can be exploited. Our AI scanning process is thorough, validating vulnerabilities with real-world tests. The cost of these scans is currently low, thanks to subsidies from companies like OpenAI and Anthropic. This allows us to maintain a robust security posture while keeping our code open.
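As an illustration of that validation step, here is a hedged Ruby sketch. Everything in it is invented for the example (`strip_scripts` is a deliberately weak toy sanitizer and `finding_confirmed?` a made-up helper, not our real pipeline): an AI-reported vulnerability only counts once a concrete payload reproduces it.

```ruby
# Toy sanitizer with the weakness the (hypothetical) scanner reported:
# case-sensitive tag removal misses uppercase variants.
def strip_scripts(html)
  html.gsub("<script>", "").gsub("</script>", "")
end

# Reproduction payload taken from the hypothetical scanner report.
REPORT_PAYLOAD = "<SCRIPT>alert(1)</SCRIPT>"

# The finding is real only if the payload survives sanitization.
def finding_confirmed?(payload)
  strip_scripts(payload).downcase.include?("<script>")
end

puts(finding_confirmed?(REPORT_PAYLOAD) ? "confirmed: write the patch" : "false positive")
```

Gating every AI-reported issue behind a reproducing test like this is what keeps a high-volume scanning pipeline from drowning maintainers in false positives.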
## The Case for Open Source
Open-source codebases, like biological immune systems, become stronger through exposure to threats. Vulnerabilities that are found and patched make the software more resilient. While closed source may offer temporary obscurity, it is not a sustainable defense. Code leaks, binaries are reverse-engineered, and APIs are mapped. The real defense lies in building robust software and operational practices that withstand scrutiny.
## Commitment to Open Source
Discourse owes its existence to open source, built on technologies like Ruby, Rails, and Linux. We believe in giving back to the community that has supported us. Closing our code would betray the principles that have guided us for over a decade. We remain committed to open source, confident that it enhances our security and benefits our community.
## Key Concepts
Open source refers to software with source code that anyone can inspect, modify, and enhance. It promotes transparency and community collaboration.
AI-driven vulnerability discovery uses artificial intelligence to identify security weaknesses in software, often at a much faster rate than traditional methods.
Category: Security
Summarized by Mente