I run a small research project under a pseudonym on a v3 onion service. The work is mostly passive: reading forums, indexing leak-site pages, tracking which ransomware crews are active and which are dormant. People sometimes ask how I decide what's okay and what isn't. I don't have a committee behind me, so I had to write my own code. Here's what it looks like.

Observation vs. Participation

The line I hold hardest is the line between observation and participation. Reading a public leak-site index is observation. Downloading the dumped data — which contains real victims' PII — is participation in the harm, full stop. Logging into a criminal forum to listen is observation. Posting, upvoting, offering services, or asking questions that help the ecosystem is participation.

  • Observe: public indexes, descriptor metadata, post timestamps, claimed victim counts.
  • Don't touch: victim data, credentials from combolists, malware samples outside a proper isolated sandbox with a research purpose.
  • Never do: purchase anything, bid on access, "test" services, engage vendors.
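One way I keep the "observe" column honest is to constrain what my tooling can even record. This is a minimal sketch in Python with field names of my own invention — the point is that the schema has no field where victim data, credentials, or downloaded content could live:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class LeakSiteObservation:
    """One passive observation of a leak-site listing.

    Deliberately omits any field that could hold victim data,
    credentials, or a downloaded payload.
    """
    site: str                  # onion hostname being indexed
    post_id: str               # the site's own identifier for the listing
    observed_at: str           # when *we* saw it (ISO 8601, UTC)
    post_timestamp: str        # timestamp claimed by the listing itself
    claimed_victim_count: int  # the crew's claim, not verified data

def record(site: str, post_id: str, post_timestamp: str,
           claimed_victim_count: int) -> dict:
    """Build a storable record; the schema enforces the
    observation-only rule -- there is nowhere to put a payload."""
    obs = LeakSiteObservation(
        site=site,
        post_id=post_id,
        observed_at=datetime.now(timezone.utc).isoformat(),
        post_timestamp=post_timestamp,
        claimed_victim_count=claimed_victim_count,
    )
    return asdict(obs)
```

If a future "feature" requires adding a field to this schema, that is exactly the moment to stop and ask which side of the observation/participation line the feature sits on.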

The inflection point is this: if your activity makes a marketplace look livelier, builds its reputation, or provides value to the operator, you are no longer a researcher. You are a customer.

Legal Lines

I am not a lawyer and this is not legal advice. But I work within a few bright lines that your counsel would very likely agree with:

  • Passive browsing of publicly reachable onion sites is generally equivalent to browsing clearnet sites — no CFAA "unauthorized access" if the site is open.
  • Any interaction that requires credentials someone else owns is off-limits regardless of how easy it is.
  • Child sexual abuse material is a third rail — accidental encounters should be backed out of instantly, reported to NCMEC via appropriate channels if needed, and never stored in any form. Some academic work maintains a strict "hash-only, never download" policy for this reason.
  • Sending money, even small amounts, "for research purposes" funds actual crime. Don't.

Protecting Sources and Subjects

Researchers sometimes become the last line of privacy for people who haven't consented to being studied. Victims whose data appears on leak sites didn't sign up for a second round of exposure when a well-meaning academic publishes screenshots. My rules:

  • Never publish victim names, even for "famous" breaches, unless the victim org has already publicly acknowledged it.
  • Redact identifiers from screenshots. Blur faces, crop emails, hash account handles.
  • If a source contacts you, protect them like a journalist would — SecureDrop-style hygiene, no cross-referencing with clearnet identities, no rich metadata in the files you keep.
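The "hash account handles" rule above can be done in a way that still lets me correlate the same handle across posts without ever storing it. I use a keyed hash (HMAC) with a secret pepper rather than a bare hash, so nobody can reverse the pseudonyms by hashing a dictionary of common handles. A sketch, with the pepper handling simplified:

```python
import hmac
import hashlib

# Secret pepper: in real use, load this from an encrypted store.
# Never keep it alongside the data it protects.
PEPPER = b"replace-with-a-long-random-secret"

def pseudonymize_handle(handle: str) -> str:
    """Replace an account handle with a stable pseudonym.

    Keyed HMAC rather than a plain hash, so the mapping cannot be
    reversed by brute-forcing a wordlist of common handles.
    Normalizing first means trivial variants map to one pseudonym.
    """
    normalized = handle.strip().lower().encode("utf-8")
    digest = hmac.new(PEPPER, normalized, hashlib.sha256).hexdigest()
    return f"handle:{digest[:16]}"  # truncated for readability in notes
```

The same handle always yields the same pseudonym, so cross-post correlation survives redaction; rotating the pepper severs every old correlation at once, which is also a useful emergency lever.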

Responsible Disclosure and Not Becoming the Ecosystem

If you find a vulnerability in a legitimate tool (Tor, Whonix, a privacy app), responsible disclosure applies exactly as it does on clearnet. If you find something in a criminal operator's infrastructure, the calculus is messier: tipping them to a flaw strengthens them, while exploiting it might harm their victims further or burn evidence that law enforcement is building on. The right move is usually to quietly share findings with a trusted LE contact (FBI IC3, your national CERT) and then step back. You are not the hero here, and you do not have the mandate.

Academic communities use Institutional Review Boards for human-subjects research. The informal analog for dark-web work is a peer — someone more experienced who will tell you "no, that's over the line" before you do it. Find that person. I have mine. The work is better for it.