this post was submitted on 26 Jun 2023
-1 points (49.5% liked)

The Agora


Everyone, I have something very important to say about The Agora.

The Problem

Let me be super clear about something people don't seem to understand about Lemmy and the fediverse: votes mean absolutely nothing. Less than nothing.

In the fediverse, anyone can open an instance and create as many users as they want, so one person can easily vote 10,000 times. I'm serious. This is not hard to do.

Voting at best is a guide to what is entertaining.

As soon as you attach an incentive to votes, the vast majority of them will be fake. They might already be mostly fake.

If you try to make any decision using votes as a guide someone WILL manipulate votes to control YOU.

One solution (think of others too!)

A council of trusted users.

The admin and top mods could set up a group to decide whom to ban and which instances to defederate from. You won't get it right 100% of the time, but you also won't be controlled by one guy in his basement running 4 instances and 1,000 alts.

Now I'm gonna go back to shitposting.

[–] ItsJason@sh.itjust.works 11 points 2 years ago (1 children)

I think OP raises a valid concern. In the near term, I don't know what will be voted on that will be worth the effort of spinning up a bot army. But it could happen eventually. Large floods of votes might be easier to detect. Smaller bot armies could be harder, but still impactful to the outcome.

Perhaps we could fire up some kind of identity service. A user goes there, puts in their username, solves a CAPTCHA, and gets back a URL to a page that contains their username. The pages can be specific to a particular vote so URLs aren't reusable. Every time a user votes, they need to solve a new CAPTCHA. The user then includes their identity URL when voting.

Admins can confirm that user names and identity urls match.

There are probably more efficient ways to do it; this was just my first thought.
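That flow can be sketched in a few lines. Everything here is hypothetical (the service name, the URL scheme, the HMAC construction) — just one way an identity service could mint per-vote, non-reusable identity URLs that admins can then check against the username on a ballot:

```python
import hashlib
import hmac
import secrets

# Secret held only by the identity service.
SERVICE_KEY = secrets.token_bytes(32)

def issue_identity_url(username: str, vote_id: str) -> str:
    """Called after a successful CAPTCHA; returns a per-vote identity URL.

    The tag binds (username, vote_id), so the URL is useless for any
    other vote and can't be forged without SERVICE_KEY.
    """
    tag = hmac.new(SERVICE_KEY, f"{username}:{vote_id}".encode(),
                   hashlib.sha256).hexdigest()
    return f"https://identity.example/v/{vote_id}/{username}/{tag}"

def admin_verify(url: str) -> bool:
    """Admins confirm the username in a ballot matches its identity URL."""
    _, _, _, _, vote_id, username, tag = url.split("/")
    expected = hmac.new(SERVICE_KEY, f"{username}:{vote_id}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

url = issue_identity_url("ItsJason", "vote-42")
print(admin_verify(url))                                 # genuine URL: True
print(admin_verify(url.replace("ItsJason", "SomeBot")))  # forged username: False
```

A real deployment would also expire tokens and rate-limit issuance, but the core check is just this keyed hash.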

[–] jjagaimo@lemmy.one 1 points 2 years ago (2 children)

A public/private key pair would be more effective. That's how "https" sites work: SSL/TLS uses certificates to authenticate who is who. Every https site has a TLS certificate, which basically contains the site's public key. During the handshake, the site uses its private key to prove it controls that certificate, and you can verify that proof with the public key (the actual traffic is then encrypted with symmetric session keys). Certificates are granted by a certificate authority, which is basically the identity service you're talking about. Certificates are themselves signed by the certificate authority, so you can tell that someone didn't just man-in-the-middle you and swap out the certificate, and the site can serve you the certificate directly instead of you needing to go elsewhere to find it.
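To make the sign-with-private-key, verify-with-public-key idea concrete, here's a textbook-RSA toy with absurdly small primes. This is purely illustrative, never real crypto — real TLS relies on vetted libraries, large keys, and proper padding:

```python
import hashlib

# Toy RSA key: tiny primes chosen so the arithmetic is visible.
p, q = 61, 53
n = p * q                           # public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (kept secret)

def digest(msg: bytes) -> int:
    """Hash the message down to a number mod n."""
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    # Only the private-key holder can compute this.
    return pow(digest(msg), d, n)

def verify(msg: bytes, sig: int) -> bool:
    # Anyone with the public key (e, n) can check it.
    return pow(sig, e, n) == digest(msg)

msg = b"this data really came from the site"
sig = sign(msg)
print(verify(msg, sig))            # genuine signature: True
print(verify(msg, (sig + 1) % n))  # forged signature: False
```

The certificate-authority layer is then just the CA signing the site's public key the same way, so you can chain trust back to a key you already have.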

The problems with this are severalfold. You would need some kind of digital identity organization(s) handling sensitive user data. This organization would need to:

  1. Be trusted. Trust is the key to having these things work. Certificate authorities are often large companies with a vested interest in having people keep business with them, so they are highly unlikely to mess with people's data. If you can't trust the organization, you can't trust any certificate issued or signed by them.

  2. Be secure. Leaking data or being compromised is completely unacceptable for this type of service.

  3. Know your identity. The ONLY way to be 100% sure that it isn't someone just making a new account and a new key or certificate (e.g. bots) would be to verify someone's details through some kind of identification. This is pretty bad for several reasons. First, it puts more data at risk in the event of a security breach. Second, there is the risk of doxxing, or of connecting your real identity to your online identity, should your data be leaked. Third, leaked keys could allow impersonation (though I'm sure there's a way to cryptographically timestamp things and then mark the key as invalid). Fourth, you could allow one person to make multiple certificates for various accounts to keep them separately identifiable, but this would also potentially enable making many alts.

There may be less aggressive ways of verifying the individual humanness of a user; just preventing bots, as in that third point, may be better. For example, a simple sign-up with questions to weed out bots, which generates an identity (certificate/key) which you can then add to your account. That would move the bot target away from the various Lemmy instances and solely onto the certificate authorities. Certificate authorities would probably need to be a small number of trusted sources, because making it "spin up your own" means anyone could do just that with less pure intentions, or with modified code that lets them impersonate other users as bots. That sucks because it goes against the open-source ideology and the fundamental idea that anyone should be able to do it themselves. Additionally, you would need to invest in tools to prevent DDoS attacks and ChatGPT bots.

User authentication authorities most certainly exist, but it wouldn't surprise me a bit if there were no suitable drop-in solution for this. It is in and of itself a fairly difficult project, because of the scale needed to start as well as the effort of verifying that users are human. It's also a service that would have to be completely free to be accepted, yet cannot just shut down, at the risk of preventing further users from signing up. I considered charging instances a small fee (e.g. $1/mo) once they pass a certain threshold of users to allow issuing further certificates to their instance, but it's the kind of thing I think would need to be decoupled from Lemmy to have a chance of surviving into more widespread use.

[–] Ajen@sh.itjust.works 1 points 2 years ago (1 children)

Interesting idea, but I don't think it would be practical to verify identities for a global community. If you've ever worked in a bar or other business that checks ID (and are from the US) you know how hard it is just to verify the identity of US citizens. If you're considering a global community, US and EU users would be the easiest to verify, and citizens of smaller countries would be much harder. How do you handle countries that have extremely corrupt governments, where it's easy to bribe an official for "real" documents for fictitious people?

[–] jjagaimo@lemmy.one 1 points 2 years ago* (last edited 2 years ago)

There are companies that do this with IDs, but they are typically global corporations or existing SSL certificate authorities; Verisign and GlobalSign are two examples. Their products are unsuitable, however, because they connect your real identity to the account. They could be useful for a one-time humanness verification, though.

The main goal would be to decouple the humanness check from Lemmy and hand it over to an authority meant just to create certificates which cannot be linked back to the person. You could probably rate-limit how often each person can create a new certificate after the human check. This would allow creating alts but limit the number of bots one person could create, as they'd need to pass the verification each time.
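A minimal sketch of that issuance policy, assuming a hypothetical authority that keeps only a salted hash of the person for rate limiting and hands out random certificate IDs that carry no link back to the identity:

```python
import hashlib
import secrets
import time

MAX_CERTS_PER_DAY = 3
_salt = secrets.token_bytes(16)  # server-side salt, never shared
_issued = {}                     # person-hash -> list of issue timestamps

def _person_hash(identity_document: str) -> str:
    """One-way handle used only for rate limiting; the raw identity
    is hashed immediately and never stored."""
    return hashlib.sha256(_salt + identity_document.encode()).hexdigest()

def issue_certificate(identity_document: str, passed_human_check: bool):
    """Return a random, unlinkable certificate ID, or None if refused."""
    if not passed_human_check:
        return None
    key = _person_hash(identity_document)
    now = time.time()
    recent = [t for t in _issued.get(key, []) if now - t < 86400]
    if len(recent) >= MAX_CERTS_PER_DAY:
        return None  # over the per-person daily limit
    _issued[key] = recent + [now]
    return secrets.token_hex(16)  # random ID, no identity embedded

# One person asking for 5 certificates in a day: alts allowed, bots capped.
certs = [issue_certificate("passport#123", True) for _ in range(5)]
print([c is not None for c in certs])  # [True, True, True, False, False]
```

The certificate IDs themselves contain nothing derived from the identity, so even a leak of `_issued` exposes only salted hashes and timestamps.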

One issue would be trust, because you would need to trust the authority's claim that the person who created the certificate was human.

[–] darkwing_duck@sh.itjust.works 1 points 2 years ago* (last edited 2 years ago) (1 children)

What the fuck happened to the internet? What happened to "never share your real name or any identifying information on the internet"?

some kind of digital identity organization(s) to be handling sensitive user data

Like Equifax? Excuse me if I am a little skeptical of "trusted" organizations handling my data.

[–] jjagaimo@lemmy.one 1 points 2 years ago* (last edited 2 years ago)

I literally addressed this. My point is that we'd need to hand over personally identifying information to be 100% sure, so the best approach at the moment would instead be to verify humanness as best as possible (e.g. better CAPTCHAs, AI/ChatGPT response detection, etc.) and shift the account sign-up to the authority's side, accepting <100% unique individuals making accounts and preventing bots in other ways.

Also, "trusted organizations handling your data" is exactly how 99% of the modern internet works. We rarely, if ever, give thought to the fact that companies like Verisign exist, or that people regularly hand credit card information to websites. At the same time, companies and corporations aren't just some random schmuck spinning up his own authentication service.