> Attaullah Baig, who served as head of security for WhatsApp from 2021 to 2025, claims that approximately 1,500 engineers had unrestricted access to user data without proper oversight, potentially violating a US government order that imposed a $5bn penalty on the company in 2020.
If it results in a new billion-dollar penalty, maybe it would've saved money to move him quietly to a cushy rest-and-vest advisory position, in which he's not allowed to see, do, or say anything.
> In his whistleblower complaint, Baig is requesting reinstatement, [...]
I don't understand the "reinstatement" part. Does he actually want to go back, and think that it wouldn't be a toxic dynamic?
(He already talked about retaliation. And then by going public the way he did, I'd think he burned that bridge, salted the earth for a mile around the bridge, and then nuked the entire metro area from orbit.)
Or is "reinstatement" simply something the lawyers just have to ask for, to ostensibly make him whole, but they actually neither want nor expect that?
> Or is "reinstatement" simply something the lawyers just have to ask for, to ostensibly make him whole, but they actually neither want nor expect that?
“Reinstatement” is usually a legal formality in whistleblower cases: lawyers ask for it because the law says the remedy for retaliation is to make the employee whole, and it strengthens the case even if nobody expects it to happen. In reality, returning to the job is almost never feasible, so the request mostly serves as leverage for a financial settlement.
Don't whistleblowers get a percentage cut of the fine?
> In the United States, whistleblowers typically receive a percentage of the money collected by the government, ranging from 10% to 30% of fines or penalties.
That's rather surprising about the accessing user data bit. When I was at Meta, the quickest way to get fired as an engineer was to access user data/accounts without permission or business reason. Everything was logged/audited down to the database level. Can't imagine that changing and the rules are taught very early on in the onboarding/bootcamp process.
I haven’t touched a lot of these cybersecurity parts of the industry (especially policies) for a while…
… but I do recall that auditing was a stronger motivator than preventing. There were policies around checking the audit logs, not being able to alter audit logs and ensuring that nobody really knew exactly what was audited. (Except for a handful of individuals of course.)
I could be wrong, but “observe and report” felt like it was the strongest security guarantee available inside the policies we followed (PCI-DSS Tier 1), and prevention was a nice-to-have on top.
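To make the tamper-evidence part concrete, here's a minimal sketch of a hash-chained, append-only audit log. This is only the general technique, with made-up field names; it isn't PCI-DSS tooling or anything WhatsApp-specific:

    import hashlib
    import json
    import time

    def append_record(log, actor, action, resource):
        """Append an audit record whose hash chains to the previous one."""
        prev_hash = log[-1]["hash"] if log else "0" * 64
        record = {
            "ts": time.time(),
            "actor": actor,
            "action": action,
            "resource": resource,
            "prev_hash": prev_hash,
        }
        # The hash covers the whole record including prev_hash, so editing
        # or deleting any earlier entry breaks every hash after it.
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        log.append(record)

    def verify_chain(log):
        """Recompute every hash; returns False if anything was altered."""
        prev_hash = "0" * 64
        for record in log:
            body = {k: v for k, v in record.items() if k != "hash"}
            digest = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if record["prev_hash"] != prev_hash or digest != record["hash"]:
                return False
            prev_hash = record["hash"]
        return True

    log = []
    append_record(log, "engineer_42", "read", "user:12345/profile")
    assert verify_chain(log)
    log[0]["actor"] = "someone_else"  # tampering...
    assert not verify_chain(log)      # ...is detected

Real deployments would anchor the latest hash somewhere the operators can't quietly rewrite, which is the "handful of individuals" part above.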
As a customer I'm angry that businesses get to use "hope and pray" as their primary data protection measure without being forced to disclose it. "Motivators" only work on people who value their job more than the data they can access and I don't believe there's any organization on this planet where this is true for 100% of the employees, 100% of the time.
That strategy doesn't help a victim who's being stalked by an employee, who can use your system to find their new home address. They often don't care if they get fired (or worse), so the motivator doesn't work because they aren't behaving rationally to begin with.
This really isn’t fair. It is not simply hope and pray: it is a clearly stated/enforced deterrent that anyone who violates the policy will be terminated. You lose your income and seriously harm your future career prospects. This is more or less the same policy that governments hold to bad actors (crime happens but perpetrators will be punished).
I get that it is best to avoid the possibility of such incidents but it is not always practical and a strong punishment mechanism is a reasonable policy in these cases.
You don't think it's fair to expect a trillion-dollar business to implement effective technical measures to stop rogue (or hacked!) employees from accessing personal information about their users?
I'm not talking about small businesses here, but large corporations that have more than enough resources to do better than just auditing.
> crime happens but perpetrators will be punished
Societies can't prevent crime without draconian measures that stifle all of our freedoms to an extreme degree. Corporations can easily put barriers in place that make it much more difficult (or impossible) to gain unauthorized access to customer information. The entire system is under their control.
Facebook/Meta has shown time and time again that it can't be trusted with data privacy, full stop.
No amount of internal auditing, externally verified and stamped with approval for following ISO standards theater will change the fact that as a company it has firebombed each and every bridge that was ever available to it, in my book.
If the data has the potential to be misused, that is enough for me to equate it as not secure for use.
To the extent a random person's evidence on the Internet amounts to proof:
From people at Facebook circa 2018, I know that end user privacy was addressed at multiple checkpoints -- onboarding, the UI of all systems that could theoretically access PII, war stories about senior people being fired due to them marginally misunderstanding the policy, etc.
Note that these friends did not belong to WhatsApp, which was at that time a rather separate suborg.
Whatever Meta says publicly about this topic, and whatever its internal policies may be, is directly contradicted by its behavior. So any attempt to excuse this is nothing but virtue signalling and marketing.
The privacy violations and complete disregard for user data are too numerous to mention. There's a Wikipedia article that summarizes the ones we publicly know about.
Based on incentives alone, when the company's primary business model is exploiting user data, it's easy to see these events as simple side effects. When the CEO considers users of his products to be "dumb fucks", that culture can only permeate throughout the companies he runs.
There’s a meaningful difference in a company wanting to exploit user data to enrich itself and allowing employees to engage in voyeurism. The latter doesn’t make the company money, and therefore can be penalised at no cost.
Your comment talks about incentives, but you haven’t actually made a rational argument tying actual incentives to behaviour.
There is actually no difference, only a difference in intent.
The problem is similar to that of government efforts to ban encryption: if you have a backdoor, everyone has a backdoor.
If Meta is collecting huge amounts of user info like candy (they are) and using it for business purposes (they are), then necessarily the employees implementing those business purposes can do that, too.
You can make them pinky promise not to. That doesn't do anything.
Amazon has a similar problem with stalking via Ring cameras. You allow and store live feeds of every Ring camera? News flash: your employees can see them, too! They're gonna use that to violate your customers!
My point is that it would be naive to believe that a company whose revenue depends on exploiting user data has internal measures in place to ensure the safe handling of that data. In fact, their actions over the years effectively prove that to not be the case.
So whatever they claim publicly, and probably to their low-level employees, is just marketing to cover their asses and minimize the impact to their bottom line.
What would be the cost of setting up safeguards and firing employees that cross the line? Feels like an access control system would be fairly easy to build, and firing employees is not a huge deal nowadays.
You claim it’s all talk, but it’s not much more effort to walk the walk. It doesn’t hurt profits to do it.
Everything is logged, but no one really cares, and the "business reasons" are many and extremely generic.
That being said, maybe I'm dumb but I guess I don't see the huge risk here? I could certainly believe that 1500 employees had basically complete access with little oversight (logging and not caring isn't oversight imo). But how is that a safety risk to users? User information is often very important in the day to day work of certain engineering orgs (esp. the large number of eng who are fixing things based off user reports). So that access exists, what's the security risk? That employees will abuse that access? That's always going to be possible I think?
Yes, but an employee will always be able to do those things, because some employees (even a large number of them) need access to user accounts and data for legitimate reasons. And since the only workable approach is to track and punish after the fact (the company couldn't run if every access to user data needed human approval in the moment), it's always a risk.
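As a sketch of what "track and punish later" tends to look like in code (all names here are invented, not Meta's actual tooling): the access is allowed, but it can't happen without a stated reason and an audit record.

    import functools
    import logging
    import time

    audit_log = logging.getLogger("audit")

    def requires_business_reason(func):
        """Deterrence, not prevention: run the access, but only with a
        recorded actor and justification."""
        @functools.wraps(func)
        def wrapper(*args, actor, reason, **kwargs):
            if not reason or len(reason.strip()) < 10:
                raise PermissionError("a business reason is required")
            audit_log.info("ts=%s actor=%s func=%s reason=%r",
                           time.time(), actor, func.__name__, reason)
            return func(*args, **kwargs)
        return wrapper

    @requires_business_reason
    def get_user_profile(user_id):
        return {"user_id": user_id, "phone": "+1-555-0100"}  # stand-in fetch

    # Note how easily a generic excuse satisfies the check, which is the
    # "business reasons are many and extremely generic" problem upthread.
    get_user_profile(12345, actor="engineer_42", reason="debugging a ticket")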
Given how WhatsApp is the de-facto way to communicate outside of the West and China, these security/data-handling "weaknesses" are most likely a feature, not a bug. An absolute bonanza for certain intelligence services.
Remember, kids: End to end encryption is useless if the "ends" are fully controlled by an (untrustworthy) third party.
Yeah, huge in Latin America in the sense that a lot of (most?) businesses only have a number that they use with WhatsApp (you can't call or even text them). Is it the same in Europe? Since I am from Latin America I never know if people from other continents use WhatsApp as much as we do, and whether, when I ask them to use WhatsApp, I am imposing a new app or it's what they regularly use.
No. Here in Germany WhatsApp is not even that widespread for businesses. But WA is very big here for personal communication, though Signal comes in second (at least amongst older people, and amongst my circle)
I think Europe is not homogenous enough for this, but in the Netherlands at least, there are plenty of companies that you can't call, email or text, but they'll have some other options: a chatbot, a web form, maybe a Twitter account, and sometimes via WhatsApp indeed.
It definitely doesn't cover the whole world's messaging market, though. For instance, in Japan and many places in SEA, Line is the standard messenger - one many people probably haven't even heard of. Though it does have a nice play on words - are you on Line?
It’s not uncommon. Orkut back in the day was wildly popular in Latin America and India. WhatsApp is the same. I think users in NA have a lot of high-quality options, as against those in Asia and LatAm who don't have many reliable options other than ones developed in NA.
You can get an android phone for about one tenth of what a new iPhone costs. That’s why android dominates lower income markets. Apple decided they just don’t want to be there.
I’m not sure that’s true. I’m fairly certain UK, France, AU, Canada WhatsApp is not vastly more popular than the blue bubble alternative. At least I believe this was the case a few years ago, based on data I’d seen.
> Blue bubble isn't really a thing ever mentioned in France either, not enough iPhone market share.
Nobody uses iMessage. People with iPhone use WhatsApp too.
The user experience of iMessage used to be subpar and now everyone has WhatsApp installed anyway, the feature set is the same and it works on all phone brands so nobody feels like switching.
Same in the UK. The fact that iMessage only works for iOS devices means it's a complete non-starter. What's the point in using a messaging app if you can't add all your contacts to a group? And if you're using a different app for group chats for this reason, then why not use it for 1-1 messaging, too?
I guess that it’s the iPhone’s messenger app? I heard that in that app, fellow iOS users have blue bubble messages and Android / other users have green bubble messages, and all the teens in the US /maybe Canada think it’s lame if you don’t have blue bubbles.
Oh. I remember hearing about that about 15y ago, didn't realise it was still a thing. I suppose because I haven't heard of anyone using iMessage for almost as long!
> According to the 115-page complaint, Baig discovered through
> internal security testing that WhatsApp engineers could “move
> or steal user data” including contact information, IP addresses
> and profile photos “without detection or audit trail”.
That isn't really the breach you're making it out to be. Profile photos, unless made private/contacts only, are already publicly visible, and so is "contact information".
Of course these are useful to intelligence services, but this doesn't mean that Baig found they don't have true end-to-end encryption.
I can't tell if I'm being paranoid or just realistic, when I suspect that FBI/Apple fights over decrypting/unlocking iPhones or iMessage are just part of Apple's security theater.
If I were Evil-Tim-Cook, I'd have a deal with the FBI (and other agencies) where I'd hand over some user's data, in return for them keeping that secret and occasionally very publicly taking Apple to court demanding they expose a specific user and intentionally losing - to bolster Apple's privacy reputation.
> If I were Evil-Tim-Cook, I'd have a deal with the FBI (and other agencies) where I'd hand over some user's data, in return for them keeping that secret and occasionally very publicly taking Apple to court demanding they expose a specific user and intentionally losing - to bolster Apple's privacy reputation.
The FBI wants its investigations to go to court and lead to convictions. Any evidence gained in this way would be exposed as coming from Apple; notwithstanding parallel construction:
* https://en.wikipedia.org/wiki/Parallel_construction
As for other agencies, I'm sure many have exploits to attack these devices and get spyware on them, and so may not need Apple's assistance.
It's possible for it to be a facade, but also real.
Apple is a part of PRISM so there's approximately a 100% chance that anything you send to Apple via message, cloud, or whatever else gets passed on to the NSA and consequently any agency that wants it. But the entire mass data collection they are doing is probably unconstitutional and thus illegal. Yet anytime it gets challenged in court it gets thrown out for lack of standing - nobody can prove it was used against them, so they don't have the legal standing to sue.
And the reason is that its usage is never acknowledged in court. Instead there is parallel construction. [1] For instance, imagine the NSA finds out somebody is e.g. muling some drugs. They tip off the police, and then the police find the car in question and create some reason to pull it over - perhaps it was 'driving recklessly.' They coincidentally find the cache of drugs after doing a search of the car because the driver was 'behaving erratically', and then this 'coincidence' is how the evidence is introduced into court.
----
So getting back to Apple, they probably want to have their cake and eat it too. By giving the NSA et al all they want behind the scenes, they maintain those positive relations (and compensatory $$$ from the government), but then by genuinely fighting its normalization (which would allow it to be directly introduced) in court, they implicitly lie to their users that they're keeping their data protected. So it's this sort of strange thing where it's a facade, but simultaneously also real.
[1] - https://en.wikipedia.org/wiki/Parallel_construction
> the entire mass data collection they are doing is probably unconstitutional and thus illegal. But anytime it gets challenged in courts it gets thrown out on a lack of standing
It's kind of wild that this is the part of the deep state MAGA just forgot about.
Maybe. I think they'd have a hard time keeping that under wraps—governments aren't typically very careful (and the FBI is about as careful as a bull in a china shop) about not showing their hand when it comes to charging people. If you're strict about keeping certain info on certain channels, smart observers would notice if someone were snooping.
For instance, if someone shared something incriminating in a group chat and got arrested, and that info was only shared in the group chat, they'd have to silence everyone in that group chat to ensure that the channel still seemed secure. I don't think at least our government is that competent or careful.
But also, people wayyyy overhype how much Apple tries to come off as privacy-forward. They sell ads and don't even allow you to deny apps access to the internet, and for the most part their phone security seems more focused on denying you control over your own phone than on denying a third party access to it. I think they just don't want the hassle of complying with warrants. Stuff like Pegasus would only be so easy to sell if you couldn't lean on the company to gain access, and I think it'd be difficult for hundreds of countries to conspire to obscure legal pressure. Finally, Apple generally has little to gain from reading your data, unlike other tech giants with perverse incentives.
Of course this is all speculation, but I do trust iMessage much more than I trust anything coming out of Meta, and most of what comes out of Google.
> someone shared something incriminating in a group chat and got arrested, and that info was only shared in the group chat
“Only” is doing an incredible amount of work there.
Unless you concoct something incriminating solely for the purpose of testing this, the something incriminating being discussed in group chat previously happened in the real world. Ripples of information were created there and can be found (parallel construction).
Right, but parallel construction only works if opsec fails. Good luck with repeating that feat forever. You clearly have far more faith in the FBI than I do. Now repeat this feat for every dumbass in intelligence in every country.
My position doesn’t require a lot of faith in the FBI.
If they fail in parallel construction, they always have the option to continue. For the vast majority of cases where opsec isn't 100% foolproof, we hear about them. For the few cases where it was foolproof, we just don't hear about them.
> For instance, if someone shared something incriminating in a group chat and got arrested, and that info was only shared in the group chat, they'd have to silence everyone in that group chat to ensure that the channel still seemed secure.
Corrupt investigators can use parallel construction to pretend that the key breakthrough in the case was actually something legal.
iMessage backups in the cloud are subject to warrants. Even if you don't use iCloud backups, can you be sure everyone you communicate with also abstains?
Right, the ability to recover implies keys exist outside the device. Even if they gossip keys to other devices you control, there are lots of people with only a single Apple device.
I think Signal is the safest choice. If you want to be absolutely sure, host your own service, and hope you know how to make it have airtight security.
Makes you wonder if Meta got one or more of those secret national security letters, or foreign equivalents.
YES!
Also makes me wonder about Google's change wrt Android security patches - the move to quarterly, under the guise of "making it easier for OEMs", is actually just so that Paragon and other nation-state spyware vendors have access to the vulnerabilities for at least 4 months before they get patched.
1) leave quietly and tell no one: con - no one on HN gets to talk about it. The next person needing money does it anyway.
2) leave loudly when you're still poor: con - you get blacklisted from tech and die from a preventable disease working at a gas station without insurance. The company implements the policy anyway.
3) leave loudly when you're rich: con - people accuse you of selling out the users.
4) Don't join Meta in the first place
I have consistently told recruiters from Meta to leave me alone. It is a company that has knowingly done massive harm to our culture and our children, and I have no interest in ever working with or for them.
This further surprised Mr. Baig, as WhatsApp, which is known for its strong security brand externally, had such a small security team of just 6 engineers, and they were all only working on this tiny aspect of application security. All the other teams in WhatsApp were well staffed. The engineering team had about 1200 engineers. In addition, there were about 100 product managers, about 100 product designers, nearly 200 data scientists, etc. WhatsApp overall had about 3000 employees.
“Are we going to be in the same situation as Mudge at Twitter?”
WhatsApp is way beyond just texting and calling, it is basically global infrastructure now, used daily by governments, NGOs, and billions. This is not a startup screw-up, it's a public utility gone seriously messed up. Heads need to roll. Stop playing god. Secure the platform or step aside.
> had such a small security team of just 6 engineers
> Company refused to allocate more than around 10 engineers to the Security team at any point
If true, this tells the story here with security culture at WhatsApp. Assuming a backlog of known weaknesses (as any established code base will have), and the velocity that 100 PMs and 1200 SWEs implies, how would you do anything as a security team besides stick your fingers in the figurative holes in the dike? The ensuing conflict between Baig and his superiors about not fixing stuff is surely going to result in an assessment of "poor performance" but is likely just Baig giving a f** about user data.
"He also claimed the company failed to remedy the hacking and takeover of more than 100,000 accounts each day, ignoring his pleas and proposed fixes and choosing instead to prioritize user growth."
There is no oversight of these monstrosities of any sort. I doubt anyone would have issues with the thesis that Meta would implement anything that might curb their user numbers unless it was mandated.
Why would they? They are beholden to their shareholders first. If it isn't illegal then it isn't illegal, immoral perhaps but that is not illegal, unless it is illegal.
My learned friends are going to have to really get their bowling arms warmed up for this sort of skit. For starters, you need a victim ... err complainant.
If you haven't already: Signal is the strongest independent e2e encrypted consumer app that is driven by a non-profit organisation using a zero knowledge approach.
Didn't Hacker News feature an article on its home page at some point (10 years ago?) reporting that Facebook had misconfigured something and users could observe their data being fed directly to some Israeli intelligence company? That was the day I deleted my FB account and never looked at anything they offer anymore.
At this point it’s best to assume that everything you communicate is being collected in some way.
There are very, very few apps I really trust. E.g. the only mechanism I trust for communicating passwords securely is GPG, I wouldn’t even use Signal for that.
Unless you own the app and know exactly what it's doing, you can't trust anyone. You don't know what the developers are going through, whether they sold the app to someone, or whether a certain code change leaks all of your data.
I stopped using Chrome when I had clear evidence of it leaking data - urls visited.
No, it was something else but I can't find it via HN search anymore. I think it was in 2013-2014, which is timeframe when I deleted my FB account (that for some reason kept living for many years as I was told).
Onavo Protect, the VPN client from the data-security app maker acquired by Facebook back in 2013, has now popped up in the Facebook iOS app itself, under the banner “Protect” in the navigation menu. Clicking through on “Protect” will redirect Facebook users to the “Onavo Protect – VPN Security” app’s listing on the App Store.
https://techcrunch.com/2018/02/12/facebook-starts-pushing-it...
Unsurprising, given it's been an open secret for over a decade that Meta employees will (if you have the right contacts or amount of money) orchestrate the banning or seizure of long-standing active accounts with desirable usernames and give them to their friends or the highest bidder.
https://www.cnbc.com/amp/2022/11/17/meta-disciplined-or-fire...
A related scheme is the existence of brokers who will, for a fee, recover banned or locked accounts. User pays the broker $X, broker pays their contact at Meta $Y, and using internal tooling suddenly a ban or suspension that would normally put someone in an endless loop of automated vague bullshit responses gets restored.
> In his whistleblower complaint, Baig is requesting reinstatement, back pay and compensatory damages, along with potential regulatory enforcement action against the company.
If the company is so bad (it is), why does he want back?!
'Just pay me the salaries I "missed", and keep them coming.' The regulatory action is just "potential".
I have no sympathy for Meta, but this guy...
Companies are not relationships where once they're your ex they are never worth interacting with ever again. If you are doing good work and then HR pushes you out, then it is reasonable to sue the company to get them to pay you damages and then go back to doing what you were before with the protection that they won't do it again.
The point I tried to make was not that he should be resentful about being kicked out, but that he doesn't really care that Meta is unethical and endangers billions.
Even if nothing changes (the regulatory action is optional), he's happy to contribute (he insists, in fact). Even among people who don't want him there.
The points you’re making are personal attacks about the whistleblower. They don’t focus on the substance of the accusations (insecurity). Instead, they focus on your idea of their career motivations and their personality.
Yes, it isn't stated because that point is moot until he is awarded remedy.
You don't negotiate with what you don't have yet. But the idea that he or they would actually want to resume working together is beyond unlikely. They will be happy to pay for him to go away, if that's the only way they can legally get rid of him.
When it comes to e2e encryption it's important for the ends to be static (not web apps) and auditable (open source, reproducible builds), because the software running on the ends can trivially compromise anything going through either of them. It can be as simple as a script being loaded from the server into a runtime such as Lua (closed-source app), or custom JavaScript delivered (web app).
When these conditions aren't met, any e2e encryption claim can be dismissed out of hand. This does not mean the service offers no value, it just means it cannot be trusted to keep anything confidential.
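For the "auditable (open source, reproducible builds)" condition, the check users would want is roughly this: rebuild the client from the published source and confirm the shipped binary matches. A minimal sketch with placeholder file names (in practice app signing means the comparison has to exclude signature data):

    import hashlib

    def sha256_of(path):
        """Stream a file through SHA-256 and return the hex digest."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Placeholder paths: the store-downloaded client vs. your own build.
    shipped = sha256_of("client-from-store.apk")
    rebuilt = sha256_of("client-built-from-source.apk")

    if shipped == rebuilt:
        print("the shipped client matches the published source")
    else:
        print("unverifiable: the binary is not what the source claims")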
I've seen some people right here on HN say that WhatsApp was an inspired acquisition and Zuck is a great product guy who knows what to buy and who to hire.
> A Meta spokesperson, Andy Stone, wrote on Threads, the company’s text-based social network: “Sadly this is a familiar playbook in which a former employee is dismissed for poor performance and then goes public with distorted claims that misrepresent the ongoing hard work of our team.”
Skeletons keep piling up while PR try to dismiss them
Corporate communications has playbook damage control responses, and this quote seems to be suggesting that the quoted response is one of them (it's "familiar").
Whether "former employees" are sketchily operating from playbooks, who knows. Because PR playbook-sounding statements don't have a lot of credibility.
Or the PR team undermines their own credibility with a stock and specious fact-free non-response.
I think the point of these is to dodge the even guiltier look of “no comment”. And signal there won’t be any potentially costly cooperative engagement from their side to their shareholders.
I hate Meta as much as the next person, but it feels like "endangering billions of users" is exaggerating here. The complaint is pretty much that WhatsApp engineers can access metadata (NOT the content of the messages).
This said, WhatsApp is not open source, so it's impossible for users to verify how the encryption works, so users have to trust that it's properly end-to-end encrypted.
If you care about privacy (and you should), then you should use Signal instead of WhatsApp.
The metadata of someone's communications can be almost as damning as the content. I would guess that if the FBI could merely have a list of who their suspect contacted over an app, and when, they'd have 90% of what they wanted.
My understanding is that in the vast majority of investigations law enforcement will be satisfied in learning only who you're talking to, i.e. "just metadata" is fine, and dangerous.
It seems reasonable. Even those who are sloppy with their opsec probably do not detail the entirety of the plan via digital mechanisms. Being able to identify likely collaborators is probably sufficient to infer some specifics of an activity.
> I would guess that if the FBI could merely have a list of who their suspect contacted over an app, and when
Well with WhatsApp they most definitely can, but it has never been a secret. WhatsApp has always had access to the metadata (whereas Signal makes a lot of effort to reduce the metadata they have access to). In ~2016 WhatsApp integrated the Signal protocol to add end-to-end encryption, but did nothing about the metadata.
Again: if you care about privacy, use Signal.
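To make "just metadata" concrete, here's a toy sketch of what can be extracted from nothing but (sender, recipient, timestamp) records. The records are invented, and no message content appears anywhere:

    from collections import Counter
    from datetime import datetime

    events = [  # (sender, recipient, ISO timestamp) -- metadata only
        ("suspect", "alice",  "2025-03-01T23:55"),
        ("suspect", "alice",  "2025-03-02T00:10"),
        ("suspect", "lawyer", "2025-03-02T09:00"),
        ("suspect", "alice",  "2025-03-03T23:58"),
        ("alice",   "bob",    "2025-03-02T00:15"),
    ]

    # Who does the suspect talk to most?
    contacts = Counter(dst for src, dst, _ in events if src == "suspect")
    print(contacts.most_common())      # [('alice', 3), ('lawyer', 1)]

    # When? Timing patterns are themselves revealing.
    late_night = [e for e in events
                  if datetime.fromisoformat(e[2]).hour >= 22]
    print(len(late_night), "late-night contacts")

    # One hop out: who do the suspect's contacts talk to?
    second_degree = {dst for src, dst, _ in events if src in contacts}
    print(second_degree)               # {'bob'}

Contact frequency, timing, and the social graph fall out of a few lines of counting; that's the 90% an investigator is said to want.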
> The complaint is pretty much that WhatsApp engineers can access metadata (NOT the content of the messages).
I don't even take this statement at face value. It's trivially easy to include models on the client side that do some message classification and treat the result as "metadata" that gives insight into the content of the message.
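A toy sketch of that mechanism, with everything (keywords, function names) hypothetical: the classifier runs on the plaintext before encryption, and only the label is uploaded, so the operator can truthfully say the content stayed end-to-end encrypted.

    TOPIC_KEYWORDS = {  # stand-in for an on-device ML model
        "protest": {"march", "rally", "protest"},
        "medical": {"doctor", "clinic", "prescription"},
        "finance": {"loan", "crypto", "wire"},
    }

    def classify(plaintext):
        words = set(plaintext.lower().split())
        return [t for t, kw in TOPIC_KEYWORDS.items() if words & kw]

    def e2e_encrypt(plaintext):
        return plaintext.encode()[::-1]  # stand-in for real encryption

    def upload(ciphertext, metadata):
        print("uploading", len(ciphertext), "bytes, metadata:", metadata)

    def send(plaintext):
        labels = classify(plaintext)         # sees the plaintext, on device
        ciphertext = e2e_encrypt(plaintext)  # the content itself is E2EE
        # Only the labels leave in the clear, filed under "metadata",
        # yet they summarize what the message was about.
        upload(ciphertext, metadata={"topics": labels})

    send("meet at the rally near the clinic at 6")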
Agreed. As I recall the way notifications work on Signal/WhatsApp is the app receives some silent notification that wakes it up, then the app does its crypto thing, and then it locally triggers the notification with the decrypted content you see. In iOS land your app needs a special entitlement to work this way. It also means if you're on very heavy group chats your battery will drain faster.
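Roughly that flow, as runnable pseudocode; the function names are mine, standing in for the real push and notification APIs:

    from types import SimpleNamespace

    def fetch_ciphertext(message_id):
        return b"\x93\x1f..."  # stand-in: pull the encrypted blob down

    def decrypt_with_local_keys(ciphertext):
        # stand-in: real apps use device-held session keys here
        return SimpleNamespace(sender="alice", body="hi")

    def show_local_notification(sender, body):
        print(f"[notification] {sender}: {body}")  # stand-in for the OS call

    def on_silent_push(payload):
        # The push carries no plaintext, just enough to locate the message.
        ciphertext = fetch_ciphertext(payload["message_id"])
        # Decryption and the visible notification both happen on-device,
        # which is why the app must wake up (and why busy group chats
        # cost battery, as noted above).
        message = decrypt_with_local_keys(ciphertext)
        show_local_notification(message.sender, message.body)

    on_silent_push({"message_id": "m-123"})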
If WhatsApp central servers could push a notification to your phone that contained your actual message content, it couldn't be E2EE.
Fair point. For E2EE messaging apps, metadata often includes encrypted message content. As others have stated, the unencrypted metadata (e.g. message recipient) can potentially be damning enough on its own.
Wasn't it WhatsApp usage that got a bunch of people droned by Israel? You should just assume your metadata, at the very least, is getting leaked to all US-friendly intelligence agencies if you are using a US-based service.
Seems just in line with all the other Meta Scandals: from providing a platform for genocide in Myanmar, harming the psychology of 100s of millions of teenagers (Instagram) to pushing extremist and fascists content while receiving big ad cash dollars for propaganda that lifts criminals and fascist politicians into the highest offices. Meta has no red lines, as long as it lines Zuckerberg's pockets.
> WhatsApp engineers could “move or steal user data” including contact information, IP addresses and profile photos “without detection or audit trail”.
I never trusted fecebook, which is why I never created an account or used any of its products (an old Instagram placeholder aside). Except last year, when I made a small startup and wanted to use Instagram to promote it. Despite using that old account to avoid potential false flagging as spam, immediately after setting it up I got banned and had to submit a personal picture holding a book or whatever to verify I am real. I did that, although it's not a personal account. Regardless, a few seconds after submitting the picture and verifying my number, it got permanently banned.

So far this is understandable: maybe it's all an automated process, which is expected. However, I wanted to get in touch with support, in any form or shape, only to find out that there's none, and that apparently the only way to actually fix something within fecebook is knowing someone who knows someone who works there. LOL, really big LOL!! A company that size operating like an underground syndicate is a total joke and totally untrustworthy.
Bottom line: never trust anything from fecebook. No matter what they say, do not.
Gang, who should we believe: a rando with 10 karma points who acts like he knows it all without any evidence or one of the last remaining journalistic institutions?
My man, Meta were caught torrenting/pirating books to train the garbage that is Llama. Meta enabled a couple of genocides, including the one in Myanmar. Meta suppressed reports on children's safety (the Washington Post is probably also activist journalism, right? https://www.washingtonpost.com/investigations/2025/09/08/met...).
We are not surprised at all that a company that has been consistently evil is evil again.
Facebook doesn't give me a straight answer, when I ask them questions about their policies, even when my questions aren't answered by their policies. The job of the privacy team within Facebook is not privacy: it's reducing liability.
Obviously not: if I had, I'd have inside contacts I could ask, instead of having to bother their public relations people to beg for scraps of intel about what they're doing with my information, while they act
I don't believe they've lied to me – I'm not so uncharitable as to assume their incorrect "it's written in the policy!" claims were deliberate lies – but they're certainly not forthcoming.
From enabling genocide in Myanmar, to interfering with elections, to giving user data to third parties in violation of its own data policies, to straight-up weird stuff like pirating/torrenting books to train their steaming pile of garbage called Llama, to having sex chatbots be weird to children.
And then there are the even weirder decisions of Zuck, the biggest loser of all:
- VR didn't seem to catch on
- the metaverse is a giant smelly pile of poo and he sunk millions in it
- he is hiring AI engineers at absurd money in a rapidly cooling bubble market
- he immediately started ass kissing the orange stain that calls himself president
Is he purposefully trying to be a caricature cartoon villain, a grotesque loser, and his company an emblem of evil? Or is it just cluelessness?
>the metaverse is a giant smelly pile of poo and he sunk millions in it
He sunk tens of billions.
Estimates (because we don't have "Reality Labs" broken out before 2019) put Zuck's Metaverse Misadventure & Boondoggle about $75B in the hole ($10B revenue on $85B spend) with no signs of a turnaround in revenue.
There are plans to turn things around with AR spectacles but decent ones are years off and will require entirely new investment with little re-use of that $75B Metaverse nonsense (Oculus acquisition, 5 generations of Quest R&D, Horizon Worlds, partnered and sponsored games and content, etc.)
The only real ROI will be the experience and staff gained. The rest will almost certainly land in the dustbin.
They managed to tap into a seemingly unlimited ocean of uninformed useful idiots, paid shills, bots and psychopaths. It's how you get rich in social media.
If a company can become a mega-company by having billions of users, small violations must be treated as larger ones. At some point, impacting millions or hundreds of millions of human-years a year through your practices rises above 'it's a small inconvenience to a single person'.
You are costing society/humanity millions or hundreds of millions of human-years. That is not a 'small inconvenience' at this scale.
> Messages are e2e and WA doesn't have access to them. We're talking about the metadata here.
You're still just blindly trusting this is the case. You can't verify the encryption or any of the code.
It would be trivial to actually encrypt the message and send it out, and then store an unencrypted version locally and quietly exfiltrate it later.
They have to already be storing an unencrypted version locally, because you can see the messages. So unless you're analyzing packets on the scale of months or years, you cannot possibly know that it isn't being exfiltrated at some point.
Take it a step further: put the exfiltration behind a flag, and then when the NSA asks, turn on the flag for that person. Security researchers will never find it.
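A sketch of that targeting trick, with all names hypothetical: the exfiltration path is dormant for everyone a researcher could ever test, and the server flips it on per user.

    def fetch_remote_config(user_id):
        """Stand-in for a server-delivered feature-flag payload."""
        # Returns {"debug_upload": False} for ~everyone, including any
        # security researcher's test account; the server can flip it
        # to True for a single targeted user.
        return {"debug_upload": False}

    def covert_upload(user_id, plaintext):
        pass  # hypothetical exfiltration channel

    def render(plaintext):
        print(plaintext)

    def on_message_decrypted(user_id, plaintext):
        flags = fetch_remote_config(user_id)
        if flags.get("debug_upload"):        # dormant code path
            covert_upload(user_id, plaintext)
        render(plaintext)

    on_message_decrypted("user-123", "hello")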
We don't really know that messages really are end-to-end encrypted though, do we? Is there a way to actually check that the messages in transit are encrypted in a way that only the other end can decrypt them? If not, we have to take Meta's word for it, which frankly doesn't carry much weight.
Not trivially. But with painstaking reverse engineering you could prove this. And people have, so you're not just taking Meta's word for it. The fact that Pegasus malware relied on a remote code execution vuln to run malware on your phone to extract WhatsApp messages really suggests that the E2EE works. If it weren't E2EE, then the makers of Pegasus could have just intercepted traffic to get your messages.
Academics have reverse engineered it as well, and though there are some weaknesses, it's not a lie that WhatsApp is E2EE. Here are some I just found:
- https://eprint.iacr.org/2025/794.pdf
- https://i.blackhat.com/USA-19/Wednesday/us-19-Zaikin-Reverse...
Eh, well, painstaking reverse engineering is like having the source code, just 10000x more work. With that, I feel like it should be possible to verify this, or at least reach a high level of confidence.
How can we call it "E2E encryption" in any meaningful sense of the term when the ends run proprietary code, and at least one of the ends has proven themselves unworthy of trust time and again.
Not sure this is correct - alaq said the messages are e2e, so not visible at all to anyone other than the participants of the conversation. The meta->data<- however IS visible to them, and can and is likely to be used for advertising.
Of course the metadata is visible. It's probably more useful than the actual content of the conversation, too. I mean, from an ML perspective, how would you even make features out of conversation content that help with CTR? That too without creeping the users out. I'd imagine it's the same reason why Meta (likely) doesn't listen in on mobile mics. Why go through the whole shebang of running always-on transcription when simple features like who talked to whom and at what times are more useful for establishing user similarities?
HN isn't a monolith. I personally never said WhatsApp is good, and I'm telling you now: avoid Signal too, until they remove the phone number requirement AND you can deploy your own server.
This is unfortunately entirely separate from that other article.
FTA:
> Attaullah Baig, who served as head of security for WhatsApp from 2021 to 2025, claims that approximately 1,500 engineers had unrestricted access to user data without proper oversight, potentially violating a US government order that imposed a $5bn penalty on the company in 2020.
Why? You think Meta removed the privacy layers or put backdoors in place? I mean, if that's the suspicion, maybe we should read the terms of service and see if they actually guarantee E2E encryption.
The way Zuckerberg tricked Acton and Koum is by itself enough for me not to trust WhatsApp, even from a hypothetical "their encryption works but that's really scummy" perspective.
It was bought as a power play, consolidation of tech power. Why would I trust them to do the right thing?
Maybe he's just laying a foundation for an upcoming legal dispute?
But until he is paid, his position is that he wants to be reinstated.
Personally, it doesn't matter to me whether there are auditing systems in place, if the data is readable in any way, shape or form.
Different culture from the blue app, or whatever they call it?
If you have a sister, imagine her being stalked by an employee.
If you have crypto, imagine an employee selling your information to a third party.
You probably mean outside of the USA; it's huge in Europe/UK
(which doesn't contradict your main point)
USA is special because it is the (only?) country where iPhone has more users than Android.
If you give someone your number, they’ll text you on WhatsApp.
Russia: Telegram
Taiwan: Line
Japan: Line
By contrast, WhatsApp is best known to me for being used in Europe, Australia, and India.
For business comms, drop Instagram and move WhatsApp to first.
For Singapore it seems LinkedIn messages are the go to IM for business.
Europe P2P: Telegram number one by a huge margin, then WhatsApp. B2B: WhatsApp, period.
Blue bubble isn't really a thing ever mentioned in France either, not enough iPhone market share.
Clearly, you are underestimating the intelligence and capabilities of the US government. They have a lot of money. Like... A lot of money.
* Recovery Keys
* Recovery Contact (someone who holds your recovery key in key escrow)
As many holes as WhatsApp's "E2E" encryption has, this shows how valuable it still is. It's all metadata, not message content.
from here: https://www.courtlistener.com/docket/71293063/baig-v-meta-pl...
And not every CEO begins life in their company with "if you need any info just ask, they trust me, dumb fucks"
Any full remedy would require that his position be reinstated.
If he wins the right to be reinstated, he will be happy to negotiate a payment instead. He is made whole.
What about any of that lacks sensible motives?
> he will be happy to negotiate a payment instead.
This, indeed, sounds way more normal than wanting to keep working for the evil company, and in a toxic environment.
It hasn't occurred to me that one can change their mind and choose a different compensation after the court decision like that.
Counterpoint: he's a monopolist and scummy person (https://news.ycombinator.com/item?id=1692122) who refuses to stop (https://arstechnica.com/tech-policy/2019/09/snapchat-reporte...) from the early days onwards (https://news.ycombinator.com/item?id=1169354)
https://news.ycombinator.com/item?id=15007454
They don’t expect to be believed.
If an app sends the message content in the clear through the notifications, then it is badly designed, period.
Complaint:
https://storage.courtlistener.com/recap/gov.uscourts.cand.45...
So not messages.
That, or you have a vested interest in making sure that your stake in Meta does not depreciate in value.
From the article:
> including contact information, IP addresses and profile photos
I can confirm this, I used to work at WhatsApp.
I'm guessing there will be some tricky legal wording in their T&C that wouldn't rule them out from being an intermediate entity that can see messages.