
Ethical Problems with the GitLab.com service:

There is GitLab the software, and there are services that use that software. The software comes in two varieties: the "Community Edition" and the "Enterprise Edition". The Community Edition is free software. These services run GitLab as their backend:

| host | notes |
| --- | --- |
|  | reCAPTCHA impedes registration |
|  | possibly restricted to BSD efforts |
|  | possibly restricted to Jami efforts |
| gitlab.com | flagship instance; uses hCAPTCHA; heavily restricted with discriminatory policies |
|  | possibly restricted to Freedesktop efforts |
|  | possibly restricted to the Tor Project (Google reCAPTCHA is used) |
|  | possibly restricted to efforts |

The rest of this article is focused on the GitLab.com service. These are the ethical problems with that specific instance:

  1. Sexist treatment toward saleswomen who are told to wear dresses, heels, etc.
  2. GitLab.com is a Google-hosted service of the CIA agency IQT (In-Q-Tel). Consequently, the service is inaccessible to users in Crimea, Cuba, Iran, North Korea, Sudan, and Syria, due to sanctions imposed by the Office of Foreign Assets Control of the United States. Thus FSF criterion C2 is unsatisfied. Quite perversely, this impacts developers who contributed free software to GitLab.com (without compensation) and who are now refused service because of their national origin.
  3. A survey shows that a significant number of bug reports are withheld when the bug tracker sits inside a restrictive or politically controversial walled garden like MS GitHub or GitLab.com. Even those willing and able to file a bug report are blocked if they are in Crimea, Cuba, Iran, North Korea, Sudan, or Syria. The chilling effect on bug reports reduces the software quality of the commons globally.
  4. GitLab.com proxies through privacy abuser CloudFlare. Because we cannot check the HTTPS connection between the GitLab EE backend and CloudFlare's data center, FSF criterion C6 is unverifiable. Moreover, users are deceived by the padlock into thinking they have end-to-end encryption with GitLab.com's host at the other endpoint, when in fact all traffic is surreptitiously intercepted. There is absolute certainty that the visitor-side tunnel terminates at a CloudFlare data center, which guarantees that CloudFlare sees all traffic, including usernames and unhashed passwords. At a minimum this undermines the spirit and intent of FSF criterion C6. FSF criterion B1 is also unsatisfied due to the deliberate sharing of all traffic with CloudFlare.
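A reader can probe the visitor-side termination point from the outside. The following is a minimal sketch (the hostname and the CloudFlare-name heuristic are assumptions for illustration, not a definitive test): it fetches the leaf certificate presented to the visitor and reports who issued it. A CloudFlare-issued certificate shows that the padlock authenticates only the visitor-to-CDN hop, not the backend host.

```python
import socket
import ssl


def looks_like_cloudflare(name: str) -> bool:
    """Heuristic: does a certificate field name CloudFlare?"""
    return "cloudflare" in name.lower()


def cert_issuer_org(host: str, port: int = 443) -> str:
    """Return the issuer organization of the leaf certificate the server presents."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a tuple of RDN tuples, e.g. ((('organizationName', 'Cloudflare, Inc.'),), ...)
    fields = dict(pair for rdn in cert["issuer"] for pair in rdn)
    return fields.get("organizationName", "")


# Illustrative usage (performs a live TLS handshake):
# org = cert_issuer_org("gitlab.com")
# print(org, "-> CDN-terminated" if looks_like_cloudflare(org) else "-> not obviously CDN")
```

Note the design choice: the check inspects only what the server hands the visitor, so it says nothing about the CDN-to-backend hop, which is exactly the unverifiable link described above.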
  5. Excessive tracking renders FSF criterion C4 unsatisfied.
  6. Contrary to widespread confused notions about GitLab being free software, the GitLab.com service does not run the GitLab Community Edition (GCE). It runs the proprietary Enterprise Edition. Even if GitLab.com were to switch to GCE, visitors would still be forced to run non-free software imposed by its content delivery network (CDN).
  7. The single most important feature of any free software repository is the ability to clone a project. It is the one feature that secures, delivers, and enables users to exercise all software freedoms. Yet GitLab.com's walled garden is so restricted that Tor users are not even permitted to clone a project:

Consequently, FSF criterion C3 is unmet.
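The blockade is easy to reproduce. A minimal sketch follows (the repository URL is a hypothetical placeholder; `torsocks` and a running Tor daemon are assumed): it builds the clone command a Tor user would run, which the CDN answers with an error page instead of git pack data.

```python
from typing import List


def tor_clone_cmd(repo_url: str) -> List[str]:
    """Command a Tor user would run: route `git clone` through Tor via torsocks."""
    return ["torsocks", "git", "clone", repo_url]


# Illustrative usage with a hypothetical placeholder URL; over Tor the CDN
# returns a CAPTCHA/error page rather than a git pack, so the clone fails:
# import subprocess
# subprocess.run(tor_clone_cmd("https://gitlab.com/example/project.git"), check=True)
```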

  8. GitLab.com treats Tor users trying to register with hostility. Access is inconvenient in some cases (e.g. GUI users), while access is outright denied to others (e.g. terminal users with non-GUI browsers, browsers without JavaScript capability, and users who happen to use a high-traffic exit node). FSF criterion C3 is therefore unmet.
  9. GitLab.com refuses service to users who attempt to register with a forwarding email address used to track spam and to protect their more sensitive internal email address. This means people who charitably approach GitLab.com to contribute a bug report are forced to compromise their own security, which ultimately discourages bug reports.
  10. GitLab.com treats Tor users with hostility even after they have established an account and proven to be non-spammers. The irony is that a Tor user was denied collaboration with the PRISM-Break Project (PBP) because a PRISM privacy abuser was given the power to control who can participate. Google should not have that power over the PRISM-Break Project. (Note that PBP refused to leave, so PBP has a hand in the oppression of its own contributors.)

Regarding the last item above, a user was simply trying to edit a message they had already posted, and a CAPTCHA was forced on them. There are several problems with GitLab.com's rampant abuse of CAPTCHAs:

  1. CAPTCHAs break robots, and robots are not necessarily malicious. E.g., an author could have a robot that corrects a widespread misspelling across all their posts.
  2. CAPTCHAs inflict uncompensated human labor and undermine the 13th Amendment in the US (note the CIA's role in this regard). CAPTCHAs put humans to work for machines when it is machines that should work for humans. The fruits of the human labor do not go to the laborer; instead, hCAPTCHA pays CloudFlare a cash reward. Consequently the laborers benefit their oppressor.
  3. CAPTCHAs are defeated anyway. Spammers find it economical to use third-world sweatshop labor to solve CAPTCHAs, while legitimate users bear the burden of CAPTCHAs that are often broken.
  4. hCAPTCHAs compromise security as a consequence of surveillance capitalism, which entails collection of the IP address and browser fingerprint.
    • Anonymity is compromised (the article covers reCAPTCHA, but hCAPTCHA is vulnerable for the same reasons).
    • The third-party JavaScript that hCAPTCHA executes could linger well after the CAPTCHA puzzle is solved and intercept user information and actions. It could even pull an eBay move and scan your LAN ports.
  5. GUI CAPTCHAs fail to meet WCAG standards and thus discriminate against impaired people, ultimately blocking satisfaction of FSF criterion C2:
| WCAG Principle | How the Principle is Violated |
| --- | --- |
| 1.1: Provide text alternatives for any non-text content so that it can be changed into other forms people need, such as large print, braille, speech, symbols or simpler language. | hCAPTCHA wholly relies on graphical images. There is no option for a text or audible puzzle. |
| 1.2: Time-based media: provide alternatives for time-based media. | hCAPTCHA has an invisible timer that the user cannot control. |
| 1.3: Create content that can be presented in different ways (for example simpler layout) without losing information or structure. | When a user attempts to use lynx, w3m, wget, cURL, or any other text-based tool, the CAPTCHA is inaccessible and thus unsolvable, so the website's content is also inaccessible. Moreover, CloudFlare attacks robots -- robots that could provide an alternative user interface for impaired or handicapped users, often by using wget or cURL to obtain data that is then presented in a more useful way. |
| 2.1: Make all functionality available from a keyboard. | hCAPTCHA does not accept answers from the keyboard. |
| 2.2: Provide users enough time to read and use content. | If the hCAPTCHA puzzle is not solved fast enough, it is removed and the user must start over. Some puzzles are vague and need more pondering time than the limit allows. |
| 3.1: Make text content readable and understandable. | When the CAPTCHA says "select all images with parking meters", how is someone in Ireland supposed to know what a parking meter in the USA looks like? When it says "click on all squares with a motorcycle" and shows an apparent motorcycle instrument panel, it's unclear whether that qualifies (it could be a moped); another image showed a scooter with a fairing that resembled a sports bike, which some would call a motorcycle. When it said "click on all squares with a train", some images showed the interior of a subway train or tram, and people disagree on whether a subway counts as a train. The instructions are also sometimes given in a language the user doesn't understand. |
| 3.2: Make web pages appear and operate in predictable ways. | It's unpredictable whether the IP reputation assessment will invoke a CAPTCHA, unpredictable whether a CAPTCHA solution will be accepted, and unpredictable how much time is allowed for solving the puzzle. |
| 4.1: Maximize compatibility with current and future user agents, including assistive technologies. | When a user attempts to use lynx, w3m, wget, cURL, or any other text-based tool, the blockade imposes tooling limitations on the user. |
  6. Users are forced to execute non-free JavaScript, thus violating FSF criterion C0.0.
  7. The CAPTCHA requires a GUI, thus denying service to users of text-based clients, including the git command.
  8. The CAPTCHAs are often broken, which amounts to a denial of service:
    • E.g. 1: the CAPTCHA server itself refuses to serve the puzzle, saying there is too much activity.
    • E.g. 2: GitLab.com has switched back and forth between Google's reCAPTCHA and hCAPTCHA (by Intuition Machines, Inc.), but at the moment it has settled on hCAPTCHA. Both have broken, and both default to denying access in that event:
      | Google reCAPTCHA (pre-2021) | hCAPTCHA (GitLab.com today) |
      | --- | --- |
      | Network neutrality abuse: users logged into Google accounts are given [more favorable treatment][netneutrality] by the CAPTCHA (but then take on more privacy abuse), while Tor users are given extra harsh treatment. | The CAPTCHAs are often unsolvable. E.g. 1: the puzzle is broken by ambiguity (is one pixel of a street-sign pole in a grid cell considered a street sign?). E.g. 2: the puzzle is expressed in a language the viewer doesn't understand. |

      [//]: # (I solved the hCAPTCHA, got a green checkmark, and then it looped back to an empty checkbox and I was forced to solve the hCAPTCHA a 2nd time. Both times I had to solve 2 windows, 4 windows in total [36 images]. Solving the 2nd hCAPTCHA then brought me to a 404 error. So after all the hard work I was still blocked.)
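Problem 1 in the list above (CAPTCHAs breaking benign robots) is concrete: a misspelling-correcting robot would use GitLab's REST API to edit its author's posts. Below is a minimal sketch under stated assumptions: the project, issue, and note IDs and the access token are hypothetical placeholders, and the "modify existing issue note" endpoint (`PUT /api/v4/projects/:id/issues/:issue_iid/notes/:note_id`) is used for illustration.

```python
import json
import urllib.request


def fix_misspelling(text: str, wrong: str, right: str) -> str:
    """The robot's whole job: replace one widespread misspelling."""
    return text.replace(wrong, right)


def build_note_edit(base_url: str, project_id: int, issue_iid: int,
                    note_id: int, new_body: str, token: str) -> urllib.request.Request:
    """Build a GitLab 'modify existing issue note' request."""
    url = (f"{base_url}/api/v4/projects/{project_id}"
           f"/issues/{issue_iid}/notes/{note_id}")
    data = json.dumps({"body": new_body}).encode()
    return urllib.request.Request(
        url, data=data, method="PUT",
        headers={"PRIVATE-TOKEN": token, "Content-Type": "application/json"},
    )


# Hypothetical IDs and token, for illustration only:
# req = build_note_edit("https://gitlab.com", 123, 7, 42,
#                       fix_misspelling("teh quick fox", "teh", "the"), "glpat-...")
# urllib.request.urlopen(req)  # sends the edit; a CAPTCHA/CDN wall interrupts such robots
```

The point is that this robot does nothing a human editing their own post couldn't do by hand; the CAPTCHA wall merely makes the mechanical version impossible.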
