LinkedIn’s algorithm changes have sparked controversy and accusations of bias, with numerous users reporting significant drops in engagement. A recent experiment, dubbed #WearthePants, highlighted concerns that the platform’s new system, powered by Large Language Models (LLMs), may be inadvertently disadvantaging women. The experiment and subsequent discussion raise questions about fairness and transparency in social media algorithms and their impact on professional networking.
The issues began surfacing in recent months, following LinkedIn’s implementation of LLMs designed to improve content relevance. Users noticed a decline in impressions and interactions on their posts, prompting investigations into potential causes. The #WearthePants experiment, in which women altered their profiles to appear male, aimed to test the hypothesis that gender played a role in content visibility.
Is LinkedIn’s Algorithm Biased Against Women?
The experiment gained traction after product strategist Michelle (a pseudonym) shared her experience of switching her profile gender to male and observing a substantial increase in post reach. Marilynn Joyner, a founder, reported a 238% jump in impressions after making the same change. Similar results were documented by Megan Cornish, Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, and Felicity Menzies, among others.
LinkedIn has firmly denied that its algorithm uses demographic information like gender to determine content visibility. In a statement, the company asserted that its AI systems do not employ such signals and that variations in feed reach are not necessarily indicative of bias. However, experts in social algorithms suggest the situation is more complex.
Brandeis Marshall, a data ethics consultant, explained that social media platforms operate through “an intricate symphony of algorithms” that consider numerous factors. She emphasized that simply changing a profile photo and name is only one variable, and the algorithm is also influenced by user interaction patterns and content preferences. Determining the precise weighting of these factors remains a challenge.
The Origins of #WearthePants
The #WearthePants initiative was initially launched by entrepreneurs Cindy Gallop and Jane Evans. They asked male colleagues to post identical content from their own accounts to compare performance. Gallop’s post reached 801 people, while her male counterpart’s post reached over 10,000, despite his significantly smaller follower count. This disparity fueled the broader experiment and concerns about algorithmic bias.
Joyner expressed a desire for LinkedIn to take accountability for any potential bias within its system. However, the inner workings of LLM-driven algorithms are largely opaque, making it difficult to pinpoint the exact reasons for observed changes in content visibility.
Marshall pointed out that LLMs are often trained on datasets that reflect existing societal biases, potentially leading to skewed outcomes. These biases can be subtle and implicit, rather than overt discrimination. Researchers have consistently found evidence of such biases across a range of LLMs.
Sarah Dean, an assistant professor of computer science at Cornell, noted that LinkedIn’s algorithm likely considers entire user profiles, including job history and engagement patterns, when determining content promotion. This means that demographics could indirectly influence both what users see and whose content is amplified.
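Dean’s point can be illustrated with a toy example. The sketch below is purely hypothetical: the feature names, weights, and profiles are invented for illustration and bear no relation to LinkedIn’s actual system. It shows how a simple linear ranker that never receives gender as an input can still produce gender-correlated outcomes when the signals it does use, such as engagement history and job seniority, correlate with gender in the underlying population.

```python
# Hypothetical illustration of indirect (proxy) bias in a ranking model.
# All feature names and weights are invented; this is NOT LinkedIn's system.

FEATURE_WEIGHTS = {
    "posts_per_week": 0.10,          # posting activity
    "avg_comments_received": 0.30,   # historical engagement signal
    "years_in_senior_title": 0.25,   # job-history signal
}

def visibility_score(profile: dict) -> float:
    """Toy linear ranking score: weighted sum of profile features.
    Note that 'gender' is not among the inputs."""
    return sum(FEATURE_WEIGHTS[k] * profile.get(k, 0.0) for k in FEATURE_WEIGHTS)

# Two profiles posting at the same rate. The second has lower historical
# engagement and less time in senior titles -- signals that can correlate
# with gender due to existing workplace patterns.
profile_a = {"posts_per_week": 3, "avg_comments_received": 12, "years_in_senior_title": 8}
profile_b = {"posts_per_week": 3, "avg_comments_received": 6, "years_in_senior_title": 4}

print(visibility_score(profile_a))  # higher score, more reach
print(visibility_score(profile_b))  # lower score, despite no gender feature
```

The point of the sketch is not that LinkedIn works this way, but that “we don’t use gender as a signal” and “outcomes don’t differ by gender” are separate claims: the former can be true while the latter is false.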
Understanding the New LinkedIn Algorithm
LinkedIn maintains that its AI systems analyze hundreds of signals to personalize the user feed, including profile information, network connections, and activity. The company claims to be continuously testing and refining its algorithm to ensure relevant content reaches the right audience. It also reported a 15% year-over-year increase in posting and a 24% increase in comments, indicating increased competition for visibility.
According to LinkedIn, content focusing on professional insights, career advice, industry news, and educational topics is currently performing well. Sales expert Chad Johnson suggested the new system prioritizes “understanding, clarity, and value” in writing, potentially de-emphasizing simple likes and reposts.
However, the changes have left many users frustrated and confused. Shailvi Wakhlu, a data science consultant, reported a significant decline in impressions despite consistently posting high-quality content. Others have observed similar drops in engagement, leading to concerns about the platform’s effectiveness for content creators.
Marshall believes that posts related to her experiences as a Black woman may receive less engagement than other content, suggesting a potential bias related to race and gender. While anecdotal, this observation highlights the complexities of algorithmic fairness and the potential for unintended consequences.
The core issue appears to be a lack of transparency regarding how LinkedIn’s algorithm operates. Without a clear understanding of the factors influencing content visibility, it’s difficult for users to optimize their posts or to assess whether bias is at play. Such reluctance to reveal algorithmic details is common among social media companies, in part because transparency could enable users to manipulate the system.
What’s Next for LinkedIn and its Users?
LinkedIn has stated its commitment to ongoing testing and refinement of its algorithm to improve content relevance and user experience. The company is also focused on addressing the increased competition for visibility resulting from platform growth. However, the concerns raised by the #WearthePants experiment and other users underscore the need for greater scrutiny and accountability in the development and deployment of AI-powered social media systems.
The debate surrounding LinkedIn’s algorithm is likely to continue, with users closely monitoring its performance and advocating for greater transparency. Future developments will depend on LinkedIn’s willingness to address these concerns and potentially adjust its system to mitigate any unintended biases. It remains to be seen whether the company will provide more detailed explanations of its algorithmic processes or continue to prioritize secrecy.