I Found a Vulnerability. They Found a Lawyer.

原始链接: https://dixken.de/blog/i-found-a-vulnerability-they-found-a-lawyer

## Vulnerability Disclosure and a Diving Insurer's Response

As a diving instructor and platform engineer, I discovered a serious security vulnerability in the member portal of a major diving insurance company while on a trip around Cocos Island. The portal assigned sequential user IDs and used a static default password - a combination trivially exploitable to access sensitive personal data, including the profiles of minors. I responsibly disclosed the vulnerability in April 2025, observing a standard 30-day embargo and contacting both the insurer and Malta's national cybersecurity agency (CSIRT Malta), since the company is headquartered there. While the insurer fixed the technical issue, its response was defensive, attempting to silence me with legal threats and a restrictive confidentiality agreement. It even suggested that *my* reporting of the vulnerability was a crime. Although the flaw has been remediated, there is no confirmation that affected users were notified - a possible GDPR violation. The incident highlights a worrying pattern: organizations prioritizing reputation management over data security, and punishing researchers instead of cooperating with them. This "chilling effect" discourages responsible disclosure and ultimately harms users. The experience underscores the need for clear vulnerability disclosure policies, gratitude toward researchers, and a commitment to transparency and user notification when data is exposed.

## Hacker News Discussion Summary: Security Issues and Corporate Responses

A Hacker News thread discusses a user's experience discovering serious security flaws at their own company. The user found that reporting issues to management led to attempts to suppress discussion and avoid a paper trail, rather than prioritizing identification, notification, and remediation - the opposite of security best practice. Many commenters related, sharing similar experiences of companies treating the acknowledgment of flaws as a personal attack, and of vulnerabilities being covered up to protect individuals rather than users. One commenter described the disillusionment of working at a large company like Google and finding it subject to the same human failings as smaller organizations. The discussion also touched on the need for independent bodies to handle security reports, protection of researchers from retaliation, and guaranteed user notification. Many felt the company should be named publicly, particularly given its EU location, the potential GDPR violations, and the lack of transparency toward affected users. The overall tone reflected widespread cynicism about corporate security priorities.

## Original Article

I'm a diving instructor. I'm also a platform engineer who spends lots of his time thinking about and implementing infrastructure security. Sometimes those two worlds collide in unexpected ways.

A frigatebird and a dive flag on the actual boat where I found the vulnerability - somewhere off Cocos Island, Costa Rica.


While on a 14-day dive trip around Cocos Island in Costa Rica, I stumbled across a vulnerability in the member portal of a major diving insurer - one that I'm personally insured through. What I found was so trivial, so fundamentally broken, that I genuinely couldn't believe it hadn't been exploited already.

I disclosed this vulnerability on April 28, 2025 with a standard 30-day embargo period. That embargo expired on May 28, 2025 - over eight months ago. I waited this long to publish because I wanted to give the organization every reasonable opportunity to fully remediate the issue and notify affected users. The vulnerability has since been addressed, but to my knowledge, I have not received confirmation that affected users were notified. I have reached out to the organization to ask for clarification on this matter.

This is the story of what happened when I tried to do the right thing.

To understand why this is so bad, you need to know how the registration process works. As a diving instructor, I register my students (to get them insured) through my account on the portal. I enter their personal information with their consent - name, date of birth, address, phone number, email - and the system creates an account for them. The student then receives an email with their new account credentials: a numeric user ID and a default password. They might log in to complete additional information, or they might never touch the portal again.

When I registered three students in quick succession, they were sitting right next to me and checked their welcome emails. The user IDs were nearly identical - sequential numbers, one after the other. That's when it clicked that something really bad was going on.

Now here's the problem: the portal used incrementing numeric user IDs for login. User XXXXXX0, XXXXXX1, XXXXXX2, and so on. That alone is a red flag, but it gets worse: every account was provisioned with a static default password that was never enforced to be changed on first login. And many users - especially students who had their accounts created for them by their instructors - never changed it.

So the "authentication" to access a user's full profile - name, address, phone number, email, date of birth - was:

  1. Guess a number.
  2. Type the default password that every account is assigned at creation.
  3. There's a good chance you get in.

That's it. No rate limiting. No account lockout. No MFA. Just an incrementing integer and a password that might as well have been password123.
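
For a sense of how cheap the missing controls are: even a crude sliding-window rate limit per source address is a few lines of code. The following is a minimal sketch under assumed names (login_allowed, a single-process dict) - not anything from the actual portal; a real deployment would need shared state across servers, plus lockout and MFA on top:

import time
from collections import deque

WINDOW_SECONDS = 60
MAX_ATTEMPTS = 5

attempts = {}  # source_ip -> deque of recent login attempt timestamps

def login_allowed(source_ip):
    # Sliding-window rate limit per source address: walking thousands
    # of sequential user IDs from one machine becomes impractically slow.
    now = time.time()
    window = attempts.setdefault(source_ip, deque())
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    if len(window) >= MAX_ATTEMPTS:
        return False
    window.append(now)
    return True

With this in front of the login form, the enumeration script below would stall after a handful of attempts per minute instead of churning through the entire ID space.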

I verified the issue with the minimum access necessary to confirm the scope - and stopped immediately after. A significant portion of IDs in the sample were still using the default password. The data exposed wasn't just email addresses - it included full personal profiles, including those of underage students.

To confirm the issue wasn't limited to a handful of accounts, I wrote a short script that automated what could be done manually in any browser. I first tried Python requests, but the portal's login mechanism was convoluted enough that a clean session wasn't feasible. So I used Selenium - a browser automation tool - which just drives a real browser through the same steps any user would take manually.

The following is a simplified, non-functional excerpt of the script. Identifiers, endpoints, and implementation details have been HEAVILY removed or altered - this will not run as-is and is included only to illustrate the triviality of the issue:



# Simplified, non-functional excerpt - identifiers and endpoints redacted.
from selenium import webdriver

def check_account(user_id):
    driver = webdriver.Chrome()
    driver.get(LOGIN_URL)

    # Fill in the login form: numeric user ID plus the shared default password.
    driver.find_element(...).send_keys(str(user_id))
    driver.find_element(...).send_keys(DEFAULT_PASSWORD)
    driver.find_element(...).click()

    # If the login did not fail, the default password is still in place:
    # fetch the profile page, record the result, and log out.
    if "failed" not in driver.page_source:
        driver.get(PROFILE_URL)
        save_profile_data(user_id, driver)
        driver.get(LOGOUT_URL)

    driver.quit()

# Walk the sequential ID space.
for uid in range(START_ID, END_ID):
    check_account(uid)

No exploits, no buffer overflows, no zero-days. Just a login form, a number, and a default password that was set for each student on creation. This is the kind of issue that anyone could discover and reproduce in an afternoon.

Here's what the output looked like for a single successful login - redacted, but structurally identical to the real output:

--- Dump for user ID: XXXXXXX ---

[Table 1]
Member ID: XXXXXXX
E-Mail address: *redacted*
Language: English
Time Zone: Greenwich Mean Time

[Table 2]
First name: Jane
Middle name:
Last Name: Doe
Gender: Female
Date of birth: 2011
Birthplace: Switzerland
Nationality: Switzerland
Mobile number: *redacted*
Phone number: *redacted*
Business phone number:
Fax number:
Number to call in case of emergency: *redacted*
Skype:
Education level:
Job:
Diving since:
Number of dives per year:
Heard about *redacted* from:

[Table 3]
c/o:
Details: Local 50
Address: Calle Example
No.: 54
Zip: 35140
City: Sometown
Country: Spain
Region: Some Region
Tax ID:

Read that date of birth again. 2011. This person was 14 years old at the time. Their full name, email address, phone number, nationality, and physical home address - all accessible with nothing more than a sequential number and a default password. And this wasn't a one-off. Multiple profiles belonged to minors.

I want to be absolutely clear: all data obtained during this verification has been permanently deleted. I did not retain any personal information, I did not attempt to access anything beyond what was required to confirm the report, and I stopped as soon as the scope was established.

I did everything by the book. I contacted CSIRT Malta (MaltaCIP) first - since the organization is registered in Malta, this is the competent national authority. The Maltese National Coordinated Vulnerability Disclosure Policy (NCVDP) explicitly requires that confirmed vulnerabilities be reported to both the responsible organization and CSIRT Malta.

Then I emailed the organization directly, CC'ing CSIRT Malta:

Dear Sir or Madam,

As a fellow diving instructor insured through [the organization] and a full-time Linux Platform Engineer, I am contacting you to responsibly disclose a critical vulnerability I identified within the [the organization]'s user account system.

During recent testing, I discovered that user accounts - including those of underage students - are accessible through a combination of predictable user ID enumeration (incrementing user IDs) and the use of a static default password that is not enforced to be changed upon first login. This misconfiguration currently exposes sensitive personal data (e.g., names, addresses, contact information including phone numbers and emails, and dates of birth) and represents multiple GDPR violations.

Key details:

  • Password reuse across accounts without forced password reset
  • Predictable, incremental user ID enumeration
  • Exposure of sensitive and underage user data without adequate safeguards

For initial confirmation, I am attaching a screenshot from Member ID XXXXXXX showing the exposed data, partly redacted for privacy reasons.

Additionally, for transparency and validation, I have shared my proof-of-concept code securely via an encrypted paste service: [link redacted]

In the spirit of responsible disclosure, I have already informed CSIRT Malta (in CC) to officially initiate a reporting process, given [the organization]'s operational presence in Malta.

I kindly request that [the organization] acknowledges receipt of this disclosure within 7 days.

I am offering a window of 30 days from today, the 28th of April 2025, for [the organization] to mitigate or resolve the vulnerability before I consider any public disclosure.

Please note that I am fully available to assist your IT team with technical details, verification steps and recommendations from a security perspective.

[contact details]

I strongly recommend assigning an IT-Security Point of Contact (PoC) for direct collaboration on this issue.

Thank you very much for your attention to this critical matter. I am looking forward to working with you towards a secure resolution.

Both of these timelines are standard - if anything, generous - in responsible disclosure frameworks.

Two days later, I got a reply. Not from their IT team. From the law firm of their Data Privacy Officer (DPO).

The letter opened politely enough - they acknowledged the issue and said they'd launched an investigation. They even mentioned they were resetting default passwords and planning to roll out 2FA. Good.

But then the tone shifted:

While we genuinely appreciate your seemingly good intentions and transparency in highlighting this matter to our attention, we must respectfully note that notifying the authorities prior to contacting the Group creates additional complexities in how the matter is perceived and addressed and also exposes us to unfair liability.

Let me translate: "We wish you hadn't told the government about our security issue."

It got better:

We also do not appreciate your threat to make this matter public [...] and remind you that you may be held accountable for any damage we, or the data subjects, may suffer as a result of your own actions, which actions likely constitute a criminal offence under Maltese law.

So, to be clear: their portal had a default password on every account, exposing personal data including that of children, and I'm the one who "likely" committed a criminal offence by finding it and telling them.

They also sent a declaration they wanted me to sign - while requesting my passport ID - confirming I'd deleted all data, wouldn't disclose anything, and would keep the entire matter "strictly confidential." The deadline? End of business the same day they sent it.

This declaration included the following gem:

I also declare that I shall keep the content of this declaration strictly confidential.

That's an NDA with extra steps: I was being asked to sign away my right to discuss the disclosure process itself - including the fact that I found a vulnerability in their system - under threat of legal action.

Then came the reminders. One "friendly" reminder. Then an "urgent" one. Sign the declaration. De-escalate. Move on. Quietly.

I generally refuse to sign confidentiality clauses in cases involving exposure of sensitive information, and I did so here as well. Coordinated disclosure depends on transparency and trust between researchers and organizations: trust that affected users will be informed, and trust that a report leads to real remediation.

Given that the organization in question had already breached that trust by exposing personal data through weak controls, I wasn't willing to grant blanket confidentiality that could be used to keep the incident out of public scrutiny. And by trying to silence me with legal threats, they had already made it clear that their priority was reputation management over user data protection. So I stood my ground.

Instead, I offered to sign a modified declaration confirming data deletion. I had no interest in retaining anyone’s personal data, but I was not going to agree to silence about the disclosure process itself.

I also pointed out that, under Malta’s NCVDP, involving CSIRT Malta is part of the expected reporting path - not a hostile act - and that publishing post-remediation analyses is standard practice in the security community.

Their response doubled down. They cited Article 337E of the Maltese Criminal Code - computer misuse - and helpfully reminded me that:

Art. 337E of the Criminal Code also provides that "If any act is committed outside Malta which, had it been committed in Malta, would have constituted an offence [...] it shall [...] be deemed to have been committed in Malta." Meaning that your actions would be deemed a criminal offence in Malta, even if committed in another country.

They also made their position on disclosure crystal clear, after I reiterated my refusal to sign their NDA:

We object strongly to the use of [the organization's name] in any such blogs or conferences you may write/attend as this would be a disproportionate harm to [the organization's] reputation [...]. We reserve our rights at law to hold you responsible for any damages [the organization] may suffer as a result of any such public disclosures you may make.

That's fine by me. Because here's the thing: The vulnerability has been fixed. Default passwords have been reset. 2FA is being rolled out. I feel sorry for the developer(s) who had to clean up this mess, but at least the issue is no longer exploitable. Sure, it would have been better if the organization had thanked me and taken responsibility for notifying affected users. If the incident qualifies as a personal data breach (which it does) and is likely to result in a (high) risk to individuals - especially given minors were involved - GDPR Articles 33 and 34 generally require notification to the supervisory authority and communication to affected data subjects.

GDPR Article 34(1) When the personal data breach is likely to result in a high risk to the rights and freedoms of natural persons, the controller shall communicate the personal data breach to the data subject without undue delay.

GDPR Article 34(2) The communication to the data subject referred to in paragraph 1 of this Article shall describe in clear and plain language the nature of the personal data breach and contain at least the information and measures referred to in points (b), (c) and (d) of Article 33(3).

I have not received confirmation that those notifications were ever carried out.

My favourite part was the organization's position on whose fault this actually was:

We contend that it is the responsibility of users to change their own password (after we allocate a default one).

Read that again. A company that assigned the same default password to every account, never forced a password change, and used incrementing numeric IDs as usernames is blaming the users for not securing their own accounts. Accounts that include those of minors.

Just a quick reminder:

GDPR Article 5(1)(f) (integrity and confidentiality): Personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures.

Under GDPR, the data controller (namely: the organization) is responsible for implementing appropriate technical and organizational measures to ensure data security. A static default password on an IDOR-vulnerable portal is not an "appropriate measure" by any definition.
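
What would an appropriate measure have looked like? Here is a minimal sketch of per-account provisioning - hypothetical names, not the insurer's actual code; a real system would additionally store only a salted hash (bcrypt/argon2) and enforce the reset server-side:

import secrets

def provision_account(user_id):
    # Every account gets its own random, unguessable initial password -
    # never one shared static default.
    initial_password = secrets.token_urlsafe(12)
    return {
        "user_id": user_id,
        "initial_password": initial_password,  # delivered to the user once
        "must_change_password": True,          # enforced at first login
    }

account = provision_account(1234567)
print(account["initial_password"])  # unique per account, ~16 characters

With unique initial credentials, guessing a sequential ID gains an attacker nothing; the entire enumeration attack collapses.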

GDPR Article 24(1) (controller responsibility): Taking into account the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons, the controller shall implement appropriate technical and organisational measures to ensure and to be able to demonstrate that processing is performed in accordance with this Regulation. Those measures shall be reviewed and updated where necessary.

This isn't an isolated case. The security research community has been dealing with this pattern for decades: find a vulnerability, report it responsibly, get threatened with legal action. It's so common it has a name - the chilling effect.

Organizations that respond to disclosure with lawyers instead of engineers are telling the world something important: they care more about their reputation than about the data they're supposed to protect.

And the real irony? The legal threats are the reputation damage. Not the vulnerability itself - vulnerabilities happen to everyone. It's the response that tells you everything about an organization's security culture.

A mature response to a vulnerability report looks something like this:

  1. Acknowledge the report - they did this, to be fair.
  2. Fix the vulnerability - they started on this too.
  3. Thank the researcher - instead of threatening them with criminal prosecution.
  4. Have a CVD policy - so researchers know how to report issues and what to expect.
  5. Notify affected users - especially the parents of underage members whose data was exposed.
  6. Not try to silence the researcher with NDAs disguised as "declarations."

If you're an organization:

  • Publish a Coordinated Vulnerability Disclosure policy. It doesn't have to be complex - maybe begin with a security.txt file (see the example after this list) and a clear process that favors transparency.
  • Thank researchers for helping you improve your security posture.
  • Don't shoot the messenger. The person reporting the bug is not your enemy. The bug is.
  • Don't blame your users for security failures that are your responsibility as a data controller.
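
For reference, a minimal security.txt (RFC 9116), served at /.well-known/security.txt, can be this short - the values below are placeholders, not real contact details:

Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:00.000Z
Preferred-Languages: en
Policy: https://example.com/security-policy

Contact and Expires are the only mandatory fields; a Policy link telling researchers what to expect is what turns it into an actual disclosure process.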

If you're a security researcher:

  • Always involve your national CSIRT. It protects you and creates an official record.
  • Document everything. Every email, every timestamp, every response.
  • Don't sign NDAs that prevent you from discussing the disclosure process. But you can agree to delete data (and MUST do so!) without agreeing to silence.
  • Know your rights. Many jurisdictions have legal protections for good-faith security research. The EU's NIS2 Directive encourages coordinated vulnerability disclosure.

Because right now, in 2026, reporting a trivial vulnerability exposing personal data - including that of children - still gets met with legal threats instead of gratitude. And that's a problem for all of us.
