34679 2 days ago

I'd like to offer a cautionary tale that involves my experience after seeing this post.

First, I tried enabling o3 via OpenRouter since I have credits with them already. I was met with the following:

"OpenAI requires bringing your own API key to use o3 over the API. Set up here: https://openrouter.ai/settings/integrations"

So I decided I would buy some API credits with my OpenAI account. I ponied up $20 and started Aider with my new API key set and o3 as the model. I get the following after sending a request:

"litellm.NotFoundError: OpenAIException - Your organization must be verified to use the model `o3`. Please go to: https://platform.openai.com/settings/organization/general and click on Verify Organization. If you just verified, it can take up to 15 minutes for access to propagate."

At that point, the frustration was beginning to creep in. I returned to OpenAI and clicked on "Verify Organization". It turns out, "Verify Organization" actually means "Verify Personal Identity With Third Party" because I was given the following:

"To verify this organization, you’ll need to complete an identity check using our partner Persona."

*Sigh.* I click "Start ID Check" and it opens a new tab for their "partner" Persona. The initial fine print says:

"By filling the checkbox below, you consent to Persona, OpenAI’s vendor, collecting, using, and utilizing its service providers to process your biometric information to verify your identity, identify fraud, and conduct quality assurance for Persona’s platform in accordance with its Privacy Policy and OpenAI’s privacy policy. Your biometric information will be stored for no more than 1 year."

OK, so now, we've gone from "I guess I'll give OpenAI a few bucks for API access" to "I need to verify my organization" to "There's no way in hell I'm agreeing to provide biometric data to a 3rd party I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder. How do I get my $20 back?"

leetrout 2 days ago

I actually contacted the California AG to get a refund from another AI company after they failed to refund me.

The AG office followed up and I got my refund. Worth my time to file because we should stop letting companies get away with this stuff where they show up with more requirements after paying.

Separately they also do not need my phone number after having my name, address and credit card.

Has anyone got info on why they are taking everyone’s phone number?

jazzyjackson 2 days ago

(having no insider info:) Because it can be used as a primary key ID across aggregated marketing databases including your voting history / party affiliation, income levels, personality and risk profiles etc etc etc. If a company wants to, and your data hygiene hasn't been tip top, your phone number is a pointer to a ton of intimate if not confidential data. Twitter was fined $150 million for asking for phone numbers under pretense of "protecting your account" or whatever but they actually used it for ad targeting.

>> Wednesday's 9th Circuit decision grew out of revelations that between 2013 and 2019, X mistakenly incorporated users' email addresses and phone numbers into an ad platform that allows companies to use their own marketing lists to target ads on the social platform.

>> In 2022, the Federal Trade Commission fined X $150 million over the privacy gaffe.

>> That same year, Washington resident Glen Morgan brought a class-action complaint against the company. He alleged that the ad-targeting glitch violated a Washington law prohibiting anyone from using “fraudulent, deceptive, or false means” to obtain telephone records of state residents.

>> X urged Dimke to dismiss Morgan's complaint for several reasons. Among other arguments, the company argued merely obtaining a user's phone number from him or her doesn't violate the state pretexting law, which refers to telephone “records.”

>> “If the legislature meant for 'telephone record' to include something as basic as the user’s own number, it surely would have said as much,” X argued in a written motion.

https://www.mediapost.com/publications/article/405501/None

sgarland 2 days ago

Tangential: please do not use a phone number as a PK. Aside from the nightmare of normalizing them, there is zero guarantee that someone will keep the same number.
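The normalization problem is concrete: the same number arrives in many textual shapes, so a naive string key splits one person into several rows. A minimal stdlib-only sketch (the formats and country-code handling are illustrative, not a real E.164 parser):

```python
import re

def naive_key(phone: str) -> str:
    """Use the raw string as a key: every formatting variant looks 'different'."""
    return phone

def normalized_key(phone: str, default_cc: str = "1") -> str:
    """Strip everything but digits and prepend a country code if missing.
    A rough illustration only, not a real E.164 parser."""
    digits = re.sub(r"\D", "", phone)
    if len(digits) == 10:  # assume a bare national (NANP) number
        digits = default_cc + digits
    return "+" + digits

# Three renderings of the same number, as they might arrive from forms.
variants = ["+1 (415) 555-0123", "415-555-0123", "14155550123"]

# Raw strings: three "different" customers.
print(len({naive_key(v) for v in variants}))       # 3 distinct keys
# Normalized: one customer -- until they change numbers, which no amount
# of normalization can fix. That's what surrogate keys are for.
print(len({normalized_key(v) for v in variants}))  # 1 distinct key
```

Even the normalized form only solves formatting; it does nothing about the number being reassigned to someone else later, which is the parent's deeper point.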

ponector 2 days ago

Even better: phone numbers are redistributed after some time of inactivity.

godelski 1 day ago

Also fun fact, people mistype and mistranscribe data. Some people even... put down fake phone numbers because they don't want to be tracked!

I would think in a world where we constantly get spam calls and texts that people would understand that a phone number is not a good PK. I mean, we literally don't answer calls from unknown numbers because of this. How is it that we can only look at these things in one direction but not the other?

azinman2 2 days ago

OpenAI doesn’t (currently) sell ads. I really cannot see a world where they’re wanting to sell ads to their API users only? It’s not like you need a phone number to use ChatGPT.

To me the obvious example is fraud/abuse protection.

jazzyjackson 2 days ago

You're thinking ads are to advertise products. Ads are to modify behavior to make you more likely to buy products.

ChatGPT has the capacity to modify behavior more subtly than any advertising ever devised. Aggregating knowledge on the person on the other end of the line is key in knowing how to nudge them toward the target behavior. (Note this target behavior may be how to vote in an election, or how to feel about various hot topics.)

ethbr1 1 day ago

> Aggregating knowledge on the person on the other end of the line is key in knowing how to nudge them toward the target behavior.

It also, as Google learned, enables you to increase your revenue per placement. Advertisers will pay more for placement with their desired audience.

lcnPylGDnU4H9OF 2 days ago

> It’s not like you need a phone number to use ChatGPT.

I’m pretty sure you do. Claude too. The only chatbot company I’ve made an account with is Mistral specifically because a phone number was not a registration requirement.

ax0ar 2 days ago

They also require it now.

KomoD 2 days ago

> It’s not like you need a phone number to use ChatGPT.

When I signed up I had to do exactly that.

godelski 1 day ago

  > To me the obvious example is fraud/abuse protection.
Phones are notorious for spam...

Seriously. How can the most prolific means of spam be used to prevent fraud and abuse? (Okay, maybe email is a little more prolific?) Like have you never received a spam call or text? Obviously fraudsters and abusers know how to exploit those systems... it can't be more obvious...

azinman2 1 day ago

It costs money to get a phone number. It’s about friction, not elimination.

What would you do instead?

godelski 1 day ago

I'm saying it clearly isn't enough friction. It's not worth the privacy cost. And let's be real: those numbers are then commonly sold to those same spammers, even if indirectly.

You are also forgetting it is easy to mask, obscure, and hijack numbers. So it doesn't cost money per number, many times they can use their own number.

There isn't a universal solution, which is the main problem here. Sometimes numbers make sense, most of the time not.

azinman2 1 day ago

Ok so you don’t like phone numbers. I get it.

But you’re OpenAI. You need to do _something_. What do you do?

jazzyjackson 1 day ago

What are they trying to prevent again? Requiring a phone number is one kind of friction for free services like twitter, but this is a service where a user registers a credit card and authorizes charges, they have the legal name of the person paying for a service, what's the phone number for? It's not like OpenAI gives me their phone number so I can call when I'm having an issue.

godelski 16 hours ago

lol dude, they already have my credit card. Look back at the OP. You're arguing that a phone number costs money so pushes people out. You know what else costs money?...

Stop doing things just because others do it. You'll never find a better way if you're always following. You'll never find better ways if you just accept things as they are. If you never push back. Let's be real, the number isn't about identity verification. They have my name and credit card. Unless by "verification" you mean cross matching me with other databases with the intent to sell that information.

You keep pestering me but you won't ask why they need that data. Like you just accept things at face value?

prmoustache 2 days ago

The fact they don't sell ads doesn't mean they are not in the business of selling users data to third parties.

Also, Netflix wasn't initially selling ads, and yet, after drastic price increases over the last few years, the ad-supported subscription is probably their #1 plan, because most people aren't willing to shell out 15 to 25 USD/€ every month to watch content that is already littered with ads.

fsmv 2 days ago

If you sell ads you're actually incentivised not to sell data because then your competitors would be able to reach your users without paying you

ethbr1 1 day ago

You're incentivized not to sell targeting data, but you're very incentivized to collect and maintain as much of it as you can, and then offer access using it as a service.

So, at the end of your day, company X has an overdetailed profile of you, rather than each advertiser. (And also, at least in the US, can repackage and sell that data into various products if it chooses)

hnaccount_rng 2 days ago

They don’t need to. It’s totally sufficient that they can correlate your chat history with your identity. That makes other identifiers more valuable, if they can extract your interests

cmenge 2 days ago

The typical use case of an API is not that you personally use it. I have hundreds of clients all go through my API key, and in most cases they themselves are companies who have n clients.

brookst 2 days ago

It’s a good conspiracy theory, but of course it’s scoped to only ChatGPT users who are also developers and using specifically the o3 model via API. So if it is a conspiracy, it’s a fairly non-ambitious one.

hshdhdhj4444 2 days ago

They may not sell ads.

They may still buy data from ad companies and store credit cards, etc.

Many of them link users based on phone number.

azinman2 2 days ago

But to do what with api users? Most api users won’t be individuals…

KomoD 1 day ago

I bet there's way more individuals than companies that use the API

codedokode 2 days ago

Obvious goal is to know the identity of users.

giancarlostoro 2 days ago

Thank you for this comment… a relative of mine spent a ton of money on an AI product that never came, a license he cannot use. I told him to contact his state's AG just in case.

pembrook 2 days ago

Source: have dealt with fraud at scale before.

Phone number is the only way to reliably stop MOST abuse on a freemium product that doesn't require payment/identity verification upfront. You can easily block VOIP numbers and ensure the person connected to this number is paying for an actual phone plan, which cuts down dramatically on bogus accounts.

Hence why even Facebook requires a unique, non-VOIP phone number to create an account these days.

I'm sure this comment will get downvoted in favor of some other conspiratorial "because they're going to secretly sell my data!" tinfoil post (this is HN of course). But my explanation is the actual reason.

I would love if I could just use email to signup for free accounts everywhere still, but it's just too easily gamed at scale.

LexGray 2 days ago

On the flip side, it makes a company seem startlingly inept when they use VOIP as a method to filter valid users. I haven't done business with companies like Netflix or Uber because I don't feel like paying AT&T a cut for identity verification. There are plenty of other methods, like digital licenses, which are both more secure and have better privacy protections.

pembrook 2 days ago

I wish we could all agree on a better way of auth -- but unfortunately this is all we have. Asking normal people to do anything outside of phone number or email (or 'login with [other account based on phone number or email]') for auth is basically impossible.

exceptione 2 days ago

Maybe they should look into a non-freemium business model. But that won't happen because they want to have as much personal data as possible.

- Parent talks about a paid product. If they want to burn tokens, they are going to pay for it.

- Those phone requirements do not stop professional abusers, organized crime, or state-sponsored groups. Case in point: Twitter is overrun by bots, scammers and foreign info-ops swarms.

- Phone requirements might hinder non-professional abusers at best, but we are sidestepping the issue of whether those corporations deserve so much trust that they can compel regular users to sell themselves. Maybe the business model just sucks.

pembrook 2 days ago

I don't like requiring phone numbers either, but saying OpenAI shouldn't do a freemium model for the hottest tech product of this century (AI) is a fundamental misunderstanding of how humans and the world work.

Also, if they don't do freemium they're getting way more valuable information about you than just a phone number.

jimmydorry 2 days ago

What part of this thread relates to freemium? Use of the API requires tokens that are paid for. General use of the AI via the web interface does not require a phone number.

Only requiring the phone number for API users feels needlessly invasive and is not explained by a vague "countering fraud and abuse" for a paid product...

SheinH 2 days ago

The discussion wasn't about freemium products though. Someone mentioned that they paid 20 bucks for OpenAI's API already and then they were asked for more verification.

juros 1 day ago

Personally I found that rejecting disposable/temporary emails and flagging requests behind VPNs filtered out 99% of abuse on my sites.

No need to ask for a phone or card -- or worse, biometric data! -- which also removes friction.
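For what it's worth, the email side of that filter is simple to sketch. A minimal illustration with a hand-picked sample blocklist (real deployments use maintained lists of thousands of disposable-email domains):

```python
# Tiny illustrative sample -- a real blocklist would be a maintained list
# of thousands of known throwaway-email domains.
DISPOSABLE_DOMAINS = {"mailinator.com", "10minutemail.com", "guerrillamail.com"}

def is_disposable(email: str) -> bool:
    """Reject signups whose address uses a known throwaway-email domain."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in DISPOSABLE_DOMAINS

print(is_disposable("someone@mailinator.com"))  # True
print(is_disposable("someone@example.org"))     # False
```

The VPN-flagging half would work the same way against an IP-range list; both are cheap checks compared to demanding a phone number, card, or biometrics.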

AnthonyMouse 2 days ago

> I'm sure this comment will get downvoted in favor of some other conspiratorial "because they're going to secretly sell my data!" tinfoil post (this is HN of course). But my explanation is the actual reason.

Your explanation is inconsistent with the link in these comments showing Twitter getting fined for doing the opposite.

> Hence why even Facebook requires a unique, non-VOIP phone number to create an account these days.

Facebook is the company most known for disingenuous tracking schemes. They just got caught with their app running a service on localhost to provide tracking IDs to random shady third party websites.

> You can easily block VOIP numbers and ensure the person connected to this number is paying for an actual phone plan, which cuts down dramatically on bogus accounts.

There isn't any such thing as a "VOIP number", all phone numbers are phone numbers. There are only some profiteers claiming they can tell you that in exchange for money. Between MVNOs, small carriers, forwarding services, number portability, data inaccuracy and foreign users, those databases are practically random number generators with massive false positive rates.

Meanwhile, major carriers are more than happy to hand out phone numbers in their ranges to spammers in bulk, to the point that this now acts as a profit center for the spammers and lets them expand their operations: they can get a large block of numbers those services claim aren't "VOIP numbers", use them to spam the services they want to spam, and then resell cheap or ad-supported SMS service at a profit to other spammers, or to privacy-conscious people who want to sign up for a service they haven't used that number at yet.

charliebwrites 2 days ago

Doesn’t Sam Altman own a crypto currency company [1] that specifically collects biometric data to identify people?

Seems familiar…

[1] https://www.forbes.com/advisor/investing/cryptocurrency/what...

jjani 2 days ago

GP did mention this :)

> I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder

93po 2 days ago

the core tech and premise doesn't collect biometric data; biometric data is collected for training purposes, with consent and compensation. There is endless misinformation (willful and ignorant) around Worldcoin, but it is not, at its core, a biometric collection company

malfist 2 days ago

Collecting biometrics for training purposes is still collecting biometrics.

93po 19 hours ago

the original claim was "it collects biometrics to identify people" and that's just factually wrong. Worldcoin in general is not about identification; in fact, it's specifically designed not to identify people. Its only purpose is to verify "does this private key have an association to any hash that was created after we scanned a unique set of human retinas". It can't even tell you which retinas it's associated with; the data simply doesn't exist

ddtaylor 2 days ago

I also am using OpenRouter because OpenAI isn't a great fit for me. I also stopped using OpenAI because they expire your API credits even if you don't use them. Yeah, it's only $10, but I'm not spending another dime with them.

numlocked 2 days ago

Hi - I'm the COO of OpenRouter. In practice we don't expire the credits, but we have to reserve the right to, or else we have an uncapped liability literally forever. Can't operate that way :) Everyone who issues credits on a platform has to have some way of expiring them. It's not a profit center for us, or part of our P&L; just a protection we have to have.

mitthrowaway2 2 days ago

If you're worried about the unlimited liability, how about you refund the credits instead of expiring them?

kfrane 2 days ago

From my experience with billing systems it is usually not possible to refund a transaction after 6 or 12 months.

gotimo 2 days ago

Even if possible with some of them, they're usually not "refunding" as much as "making a new transaction for the same amount the other way", which looks the same on the surface until reversals, voids or rejections happen and it all becomes a mess.

diggan 2 days ago

Seems like a weird question to ask OpenRouter Inc, a for-profit company.

Really a shame OpenAI left their non-profit (and open) roots, could have been something different but nope, the machine ate them whole.

azemetre 2 days ago

Why is it a bad thing to ask for a company to do right by their paid customers? This type of policy absolutely causes the company to lose more business in the future because it shows customers that they don't care about customers.

I never heard of OpenRouter prior to this thread, but will now never use them and advocate they never be used either.

mitthrowaway2 2 days ago

A fair refund policy is not in conflict with a company being for-profit. I (and it seems many others) would be much less inclined to buy credits from a company that will expire them if I don't use it, and more inclined to buy credits from a service that will refund them if I end up not using it. Once I've bought them I'm more likely to use them. And in addition to reducing that purchasing friction and gaining market share, they can get the time-value of the money between when I bought the credits and when they eventually refund them.

Enlightened self-interest is when you realize that you win by being good to your customers, instead of treating customer service like a zero-sum game.

otterley 1 day ago

Out of curiosity, what makes you different from a retailer or restaurant that has the same problem?

carstenhag 2 days ago

Why only 365 days? It would be way fairer, and still OK for you (if it's such a big issue), to expire them after 5 years.

bonki 2 days ago

I wonder if they do this everywhere, in certain jurisdictions this is illegal.

johnnyyyy 2 days ago

Then you shouldn't use OpenRouter. From the ToS, section 4.2 (Credit Expiration; Auto Recharge): "OpenRouter reserves the right to expire unused credits three hundred sixty-five (365) days after purchase"

cactusplant7374 2 days ago

That is so sleazy.

cedws 2 days ago

After how long do they expire?

zeograd 2 days ago

IIRC, 1 year

0xdeafbeef 2 days ago

Same for anthropic

fakedang 2 days ago

As someone not in the US, I do a straight nope out whenever I see a Persona request. I advise everyone else to do the same. Afaik, it's used by LinkedIn and Doordash too.

cess11 2 days ago

I suspect their data collection might not be legal in the EU.

https://withpersona.com/legal/privacy-policy

To me it looks like an extremely aggressive data pump.

wqaatwt 2 days ago

There are stories about e.g. Hetzner requiring all sorts of data from people who want to open/verify accounts so perhaps not. Might just be an anti “money laundering” thing. Especially if the credit card company ends up refunding everything..

Hetzner_OL 1 day ago

Hi there, During our KYC process, we do sometimes ask customers to submit additional information, including IDs, so that we can verify their identity. However, we follow the General Data Protection Regulation in the EU, amongst other regulations. So we only keep that data for the account verification process. We also have a data protection officer and team who can answer questions potential customers have about data protection measures that we take. Their contact information is here: https://www.hetzner.com/legal/privacy-policy/ --Katie, Hetzner Online

7bit 2 days ago

What stories? Can you back up that claim with some sources please?

zuppy 2 days ago

Infomaniak did request my personal id or passport for creating a company account. I'm not going to provide any extra source for you, this is what happened to me, you can either believe it or not.

7bit 2 days ago

That's not a source, and it is not my responsibility to back up the claims you made; that is yours. If you don't have any sources and admit to just saying things that may not be provable, I can also live with that.

wut42 1 day ago

Hetzner is famously notorious for this, but not famous enough for publications to pick it up. So by your definition, years of people talking about their experiences with this is nothing?

wqaatwt 2 days ago

What's the source on OpenAI doing the same? How is it any more legitimate in any way?

Which would kind of make the entire "discussion" moot and pointless.

Marsymars 2 days ago

Oh I also recently got locked out of my linkedin account until I supply data to Persona.

(So I’m remaining locked out of my linkedin account.)

csomar 2 days ago

> How do I get my $20 back?

Contact support and ask for a refund. Then a charge back.

AstroBen 2 days ago

KYC requirement + OpenAI preserving all logs in the same week?

jjani 2 days ago

OpenAI introduced this with the public availability of o3, so no.

It's also the only LLM provider which has this.

What OpenAI has that the others don't is SamA's insatiable thirst for everyone's biometric data.

mycall 2 days ago

I think KYC has been beaten by AI agents: according to RepliBench [0], obtaining compute requires KYC, and that task shows a high success rate in their graphic.

[0] https://www.aisi.gov.uk/work/replibench-measuring-autonomous...

infecto 2 days ago

KYC has been around for a few months, I believe. When they released some of the additional thought logs, you had to be verified to see them.

baq 2 days ago

Meanwhile the FSB and Mossad happily generate fake identities on demand.

romanovcode 2 days ago

The whole point of identity verification is for the same Mossad to gather your complete profile and everything else they can from OpenAI.

Since Mossad and the CIA are essentially one organization, they already do it, 100%.

shmoogy 2 days ago

I was excited about trying o3 for my apps but I'm not doing this validation.. thanks for the heads up.

bratao 2 days ago

You are even lucky to be able to verify. Mine has given me a "Session expired" error for months!! Support does not reply.

conradev 2 days ago

I was more excited by the process, like, there exists a model out there so powerful it requires KYC

which, after using it, fair! It found a zero day

__float 2 days ago

I think they're probably more concerned about fake accounts and people finding ways to get free stuff.

gscott 2 days ago

China is training their AI models using ChatGPT. They want to stop or slow that down.

olalonde 2 days ago

Why? It seems counterproductive given OpenAI's mission statement: "We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome."

stavros 2 days ago

Because OpenAI's actual mission statement is "money, and lots of it".

conradev 2 days ago

Yeah. I think they need money to deliver AGI

conradev 2 days ago

I actually think they’re worried about foreign actors using it for…

- generating synthetic data to train their own models

- hacking and exploitation research

etc

abeindoria 2 days ago

What free stuff? It requires a paid API.

DrammBA 2 days ago

With no intention to tarnish your pure world view, paid services with low registration requirements are ideal for account laundering and subscription fraud with stolen credit cards

hiatus 2 days ago

> which, after using it, fair! It found a zero day

Source?

sothatsit 2 days ago

Recently, Sean Heelan wrote a post "How I used o3 to find CVE-2025-37899, a remote zeroday vulnerability in the Linux kernel’s SMB implementation". It might be what they are referring to.

Link: https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-...

conradev 2 days ago

Yep, that’s the one!

verisimi 2 days ago

> OK, so now, we've gone from "I guess I'll give OpenAI a few bucks for API access" to "I need to verify my organization" to "There's no way in hell I'm agreeing to provide biometric data to a 3rd party I've never heard of that's a 'partner' of the largest AI company and Worldcoin founder. How do I get my $20 back?"

This should be illegal. How many are going to do the same as you, but then think that the effort/time/hassle they would waste to try to get their money back would not be worth it? At which point you've effectively donated money to a corp that implements anti-consumer anti-patterns.

sneak 2 days ago

Yeah, same. I am a paying API customer but I am not doing biometric KYC to talk to a bot.

jiggawatts 2 days ago

This is in part "abuse prevention"[1] and in part marketing. Making customers feel like they're signing up to access state secrets makes the models seem more "special". Sama is well known to use these SV marketing tricks, like invite-only access, waiting lists, etc to psychologically manipulate users into thinking they're begging for entry to an exclusive club instead of just swiping a credit card to access an API.

Google tried this with Google Plus and Google Wave, failed spectacularly, and have ironically stopped with this idiotic "marketing by blocking potential users". I can access Gemini Pro 2.5 without providing a blood sample or signing parchment in triplicate.

[1] Not really though, because a significant percentage of OpenAI's revenue is from spammers and bulk-generation of SEO-optimised garbage. Those are valued customers!

miki123211 2 days ago

Gemini doesn't give you reasoning via API though, at least as far as I'm aware.

jjani 2 days ago

If by reasoning you mean showing CoT, Gemini and OA are the same in this regard: neither provides it, through the UI or the API. The "summaries" both provide have zero value and should be treated as non-existent.

Anthropic exposes reasoning, which has become a big reason to use them for reasoning tasks over the other two despite their pricing. Rather ironic when the other two have been pushing reasoning much harder.

finebalance 2 days ago

Google exposes their reasoning. You can use their new gemini python sdk to get thought traces.

jjani 2 days ago

Google does not expose their reasoning any more. They give "thought summaries" which provide effectively zero value. [1][2]

[1] https://ai.google.dev/gemini-api/docs/thinking#summaries [2] https://discuss.ai.google.dev/t/massive-regression-detailed-...

jiggawatts 2 days ago

Works for me?

Maybe you’re thinking of deep research mode which is web UI only for now.

paulcole 2 days ago

HN Don’t Hate Marketing Challenge

Difficulty: Impossible

5Qn8mNbc2FNCiVV 2 days ago

This feels eerily similar to a post I read within the last month. Either I'm having deja vu, it's a coincidence that the exact same story is mentioned, or there's something else going on.

bgwalter 2 days ago

What should be going on? A regular Google search for "openai persona verify organization" shows withpersona.com in the second search result.

5Qn8mNbc2FNCiVV 2 days ago

Yeah ok guess I misremembered it a bit but I was curious too and found the previous one I've thought of: https://news.ycombinator.com/item?id=43795406

Barbing 2 days ago

Good eye! In this case, complaints are numerous. See a web search for:

openai persona verification site:community[.]openai[.]com

e.g. a thread with 36 posts beginning Apr 13:

"OpenAI Non-Announcement: Requiring identity card verification for access to new API models and capabilities"

But always good to be on look out for shenanigans :)

Retric 2 days ago

This is OpenAI’s fairly dystopian process, so the exact same thing happens to lots of people.

verisimi 2 days ago

It's a concerted attempt to de-anonymise the internet. Corporate entities are jostling for position as id authorities.

xboxnolifes 2 days ago

This is just the process for OpenAI. It's the same process I went through as well.

predkambrij 2 days ago

Interesting, it works for me through OpenRouter, without a configured OpenAI integration. Although I have an OpenAI account and did verification with OpenAI before. Conspiracy theory would say that they are exchanging PII so OpenRouter knows who I am :)

coderatlarge 2 days ago

this reminds me of how broadcom maintains the “free” tier of vmware.

teruakohatu 2 days ago

Can you explain? Is it not actually free?

coderatlarge 2 days ago

there are so many non-functional websites and signups required to get to the end of the rainbow that any sane person quits well before reaching any freely distributed software, if, in fact, there still is any.

belter 2 days ago

With all this, plus the saving of all chats, they can't operate in the EU. But they do ....

nateburke 2 days ago

I don't recall Persona being in the mix last year when I signed up. Interesting development.

gwhr 2 days ago

I think it was a recent thing [1], but I thought they were only considering it

[1] https://techcrunch.com/2025/04/13/access-to-future-ai-models...

righthand 2 days ago

Thank you for reminding me why I’ll never touch these LLM services.

gloosx 2 days ago

>ID Check

Just send them a random passport photo from the Internet, what's the deal? Probably they are just vibe-verifying the photo with "Is it legit passport?" prompt anyways.

_joel 2 days ago

I think modern face verification has moved on, it's been video in all my encounters.

gloosx 2 days ago

still no real human is involved, as they mention their verification is automated and probabilistic — which is especially funny to hear in the context of verification. I'm pretty sure even a kid could get around it, e.g. by showing on video a photo of a person holding their passport, which you can find online.

tethys 2 days ago

No. You have to turn your head, and stuff. Also, even if this would work, they allow only one verification per person per 90 days.

gloosx 1 day ago

>one verification per person per 90 days.

This is absurd. How do they define "person"? On the internet I can be another person from another country in a minute; another minute, and I'm a different person from a different country.

_joel 2 days ago

No, that's not how it works.

gloosx 1 day ago

So you're saying it's not just feeding your video to the AI model and blindly trusting its outcome? Any evidence of how it works, then?

_joel 1 day ago

You can't just hold up a 2d object (passport) and wave it about to try and trick it. There are heuristics at work.

gloosx 1 day ago

For a regular camera, all objects are 2D; it isn't equipped to capture depth. What heuristics are you talking about? There is an ML model at work that tries to tell whether the object is legit, but it can't have any real sense of what is in the image: it just relies on a statistically plausible outcome when fed pixels from your camera feed, which means you definitely can trick it.

_joel 10 hours ago

You have to align your face in certain ways; a random video you found on the internet won't work.

If you don't match your face to the markers overlaid on the screen, which gather heuristics about your eye distance, nose, etc., it won't work. That's impossible to do with a 2D object you're holding. So yeah, it does matter.

There's a literal industry that's popped up to make face identification from your smartphone a thing. You might want to research and catch up on how they work.

sneak 2 days ago

It requires video and an app. They are collecting facial biometrics.

gloosx 2 days ago

App? So you cannot verify without a mobile phone?

sneak 2 days ago

You can’t sign up in the first place without a mobile phone number. They require a phone number and block VoIP services.

gloosx 1 day ago

A real mobile phone number to receive a code is 10¢. Maybe a bit more, but certainly not more than a dollar.

askl 2 days ago

Crazy, I already gave up registering for chatgpt because they asked for my phone number. I'm not giving that to any random startup.

exceptione 2 days ago

Welcome to tech dystopia.

Hello Human Resource, we have all your data, please upload your bio-metric identity, as well as your personal thoughts.

Building the next phase of a corporate totalitarian state, thank you for your cooperation.

34679 2 days ago

Follow-up. I pasted the above into ChatGPT because I genuinely wanted to know how to get my 20 bucks back. That request is in and they say I'll get a response in 2-3 days. I then asked it "What are the potential risks of agreeing to their terms?" Here's the response:

Agreeing to Persona’s terms, especially for biometric identity verification, involves both privacy and long-term data security risks. Here’s a clear breakdown of the main risks you should be aware of:

1. Biometric Data Collection

Risk: Biometric identifiers (like facial recognition, voiceprints, etc.) are extremely sensitive and irreplaceable if compromised.

- What they collect: Persona may collect a selfie, video, and metadata, and extract biometric templates from those for facial comparison and liveness detection.
- If leaked or abused: Unlike passwords, you can't change your face. A future data breach or misuse could lead to permanent identity compromise.

2. Data Storage & Retention

Risk: Persona says biometric data is kept for up to one year, but:

- You’re relying on their internal policies, not a legal guarantee.
- There’s no technical detail on how securely it’s stored or whether it’s encrypted at rest.

Worst-case scenario: Poorly secured biometric templates could be stolen, reused, or matched against other data sets by bad actors or governments.

3. Third-Party Sharing and Surveillance Risks

Risk: Your biometric and ID data may be shared with subprocessors (partners/vendors) that you haven’t explicitly vetted.

- Persona may transfer your data to cloud providers (like AWS, GCP), verification specialists, or fraud prevention services.
- Depending on jurisdiction, data could be subject to subpoenas, surveillance laws, or government backdoors (especially in the U.S.).

4. Consent Ambiguity & Future Use

Risk: The fine print often includes vague consent for "quality assurance", "model improvement", or "fraud detection".

- This opens the door to retraining algorithms on your biometric data—even if anonymized, that's still a use of your body as data.
- Their privacy policy may evolve, and new uses of your data could be added later unless you opt out (which may not always be possible).

Should You Agree?

Only if:

- You absolutely need the service that requires this verification.
- You’re aware of the privacy tradeoff and are okay with it.
- You trust that Persona and its partners won’t misuse your biometric data—even a year down the line.

If you’re uneasy about this, you’re not alone. Many developers and privacy advocates refuse to verify with biometrics for non-critical services, and companies like OpenAI are increasingly facing criticism for requiring this.

stevage 2 days ago

Huh, I never considered using ChatGPT to read terms of service for me. Nice idea.

_345 2 days ago

o3 is really powerful. I understand it tbh. They don't want scammers and abusers easily accessing it