Concerns over data privacy are mounting as artificial intelligence becomes more integrated into daily life. Confer, a privacy-focused AI service launched in December by Signal co-founder Moxie Marlinspike, offers an alternative to mainstream chatbot platforms by prioritizing user confidentiality. The service aims to provide a ChatGPT-like experience without the extensive data collection practices common among larger AI providers.
Confer is currently available to the public, with a free tier offering limited usage and a paid subscription for unlimited access. The project represents a growing movement towards more secure and user-controlled AI technologies, responding to anxieties about how tech companies use personal information. The launch comes as OpenAI, the creator of ChatGPT, begins testing advertising within its platform, further fueling privacy debates.
The Rise of Privacy-Focused AI
The increasing sophistication of AI chatbots has prompted users to disclose unusually personal information. Marlinspike argues that these interfaces “actively invite confession,” gathering more intimate data than previous technologies. This data, when combined with advertising models, raises ethical concerns about manipulation and the potential for exploiting personal vulnerabilities.
However, building a truly private AI service is a complex undertaking. Confer tackles this challenge through a multi-layered security approach. The project’s core philosophy centers on preventing the service provider from accessing user conversations, thereby eliminating the risk of data misuse.
Technical Safeguards
Confer employs several key technologies to achieve this. First, messages are encrypted with keys tied to the user’s WebAuthn passkey, so they remain unreadable in transit to anyone without that passkey. The approach works best on mobile devices and newer Mac computers, though workarounds exist for Windows and Linux users.
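One common way to bind encryption keys to a passkey is the WebAuthn PRF extension, which asks the authenticator to derive a high-entropy secret that only that passkey can reproduce. The sketch below is an illustrative TypeScript example of that general pattern, not Confer’s published implementation; the salt label, function name, and use of AES-GCM are assumptions.

```typescript
// Illustrative sketch: deriving a symmetric key from a passkey via the
// WebAuthn PRF extension, then encrypting a chat message client-side.
// The salt label and request shape are assumptions, not Confer's protocol.

async function encryptMessage(plaintext: string, credentialId: ArrayBuffer) {
  // Ask the authenticator to evaluate its PRF over a fixed salt.
  const assertion = (await navigator.credentials.get({
    publicKey: {
      challenge: crypto.getRandomValues(new Uint8Array(32)),
      allowCredentials: [{ id: credentialId, type: "public-key" }],
      extensions: {
        prf: { eval: { first: new TextEncoder().encode("confer-chat-key") } },
      } as any, // PRF typings require a recent DOM lib
    },
  })) as PublicKeyCredential;

  // The PRF output is a high-entropy secret only this passkey can reproduce.
  const prfOutput = (assertion.getClientExtensionResults() as any).prf?.results?.first;
  if (!prfOutput) throw new Error("Authenticator does not support the PRF extension");

  // Import the secret as an AES-GCM key and encrypt the message.
  const key = await crypto.subtle.importKey("raw", prfOutput, "AES-GCM", false, ["encrypt"]);
  const iv = crypto.getRandomValues(new Uint8Array(12));
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { iv, ciphertext };
}
```

A production client would typically run the PRF output through a key-derivation step rather than using it directly, but the core property is the same: only a device holding the passkey can reproduce the key.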
Additionally, all processing of user queries occurs within a Trusted Execution Environment (TEE) on the server side. This isolated environment, combined with remote attestation, lets clients verify which software is running on the server and blocks unauthorized access to data. Confer then uses a variety of open-weight foundation models to process requests inside that environment. This combination of encryption, attested processing, and open-weight models is designed to ensure user privacy.
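To see where attestation fits, consider the kind of check a client could run before releasing any data: fetch the server’s attestation document, compare the reported code measurement against a pinned value for an audited build, and only then encrypt requests to a key that exists solely inside the enclave. The sketch below is a simplified illustration under assumed endpoint and field names, not Confer’s actual protocol, and it omits verification of the hardware vendor’s signature chain.

```typescript
// Hypothetical sketch of a client-side attestation check a TEE-backed
// service might perform before sending any data. Endpoint path, field
// names, and the expected measurement are illustrative assumptions.

const EXPECTED_MEASUREMENT = "sha384:..."; // pinned hash of the audited enclave build

interface AttestationDocument {
  measurement: string;             // hash of the code loaded into the enclave
  enclavePublicKeyJwk: JsonWebKey; // key generated inside the enclave
  signature: string;               // signed by the hardware vendor's attestation service
}

async function getVerifiedEnclaveKey(baseUrl: string): Promise<CryptoKey> {
  const doc: AttestationDocument = await (await fetch(`${baseUrl}/attestation`)).json();

  // 1. The vendor signature over the document would be checked here
  //    (omitted: it depends on the specific TEE platform's certificate chain).

  // 2. Refuse to talk to the server unless it is running the expected code.
  if (doc.measurement !== EXPECTED_MEASUREMENT) {
    throw new Error("Enclave measurement mismatch: refusing to send data");
  }

  // 3. Import the enclave's public key; requests encrypted to it can only be
  //    decrypted inside the attested environment, not by the operator.
  return crypto.subtle.importKey(
    "jwk",
    doc.enclavePublicKeyJwk,
    { name: "RSA-OAEP", hash: "SHA-256" },
    false,
    ["encrypt"]
  );
}
```

The design choice this illustrates is that trust shifts from the operator to verifiable code: if the measurement does not match what reviewers audited, the client simply refuses to send anything.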
Cost and Accessibility
Confer operates on a tiered pricing structure. Users can access a free tier limited to 20 messages per day and five active chats. For those requiring more extensive use, a subscription costs $35 per month, granting unlimited access, more advanced models, and personalization options. This price point is significantly higher than ChatGPT Plus, reflecting the added cost of running privacy-preserving infrastructure.
The higher cost is a deliberate choice, signaling that privacy is a premium feature. The project’s creators believe that users who prioritize confidentiality are willing to pay for a service that demonstrably protects their information. This contrasts with the freemium models of many other AI platforms, which often rely on data collection to offset costs.
Implications for the AI Landscape
Confer’s emergence highlights a growing demand for AI security and user control. While large language models (LLMs) continue to advance rapidly, concerns about data breaches and algorithmic bias remain prevalent. The project offers a potential blueprint for building AI services that prioritize ethical considerations and user rights.
The success of Confer could influence the broader AI industry, potentially prompting other providers to adopt more privacy-preserving practices. However, widespread adoption of such measures will likely require regulatory pressure and a shift in consumer expectations. The current landscape is dominated by companies that have built their business models on data collection, making a transition to privacy-focused approaches challenging.
Meanwhile, the open-source nature of Confer’s underlying technology could foster innovation and collaboration within the privacy-focused AI community. Developers can contribute to the project, enhancing its security and functionality. This collaborative approach contrasts with the closed-source development models of many commercial AI platforms.
Looking ahead, the project team plans to continue refining Confer’s security features and expanding its capabilities. A key focus will be improving the accessibility of the WebAuthn passkey system across different operating systems. The long-term viability of Confer will depend on its ability to attract a sufficient user base and demonstrate the effectiveness of its privacy protections. The next six to twelve months will be critical in determining whether this model for responsible AI can gain traction in a competitive market.

