Security flaws expose ‘vibe-coding’ AI platform Orchids to easy hacking

A BBC report reveals critical security vulnerabilities in Orchids, a popular AI-assisted coding platform. A cybersecurity researcher demonstrated that flaws in authentication and input handling could allow attackers to access projects, inject malicious code, or steal sensitive data. The platform’s design, which relies on natural language prompts and shared workspaces, broadens the potential attack surface. Experts highlight the need for robust security testing and clear accountability in the rapidly evolving AI developer ecosystem: AI tools lower barriers to coding while simultaneously introducing new security risks when safeguards are inadequate.
