I’m assuming they were referring to the outage that occurred today, which pulled a ton of internet services, including Signal, offline temporarily.
You can have all the encryption in the world, but if the centralized data point that allows you to access the service is down, then you’re fucked.
no matter where you host, outages are going to happen… AWS really doesn’t have many… it’s just that it’s so big that everyone notices - it causes internet-wide issues
Monero, Nostr, Lemmy, and Mastodon did not go down. Why? Because they are decentralized.
Monero isn’t like the other three; it’s P2P with no single point of failure.
I haven’t looked too closely at Nostr, but I’m assuming it’s typically federated, with relays acting like Lemmy/Mastodon instances in terms of data storage (it’s a protocol, so I suppose posts could be stored locally and switching relays is easy). If your instance goes down, you’re just as screwed as you would be with a centralized service, because Lemmy and Mastodon instances are essentially centralized services that share data with each other. If your instance doesn’t go down but a major one does, your experience will still be significantly degraded.
The only way to really solve this problem is with P2P services, like Monero, or with enough diversity in your infrastructure that a single major failure doesn’t kill the service. P2P is easy for something like a currency, but much harder for social media, where you expect some amount of moderation, and redundancy is both expensive and complex.
Nostr is a weird beast. You’re correct that it’s not peer-to-peer like Monero. However, it’s not quite federated in the same way ActivityPub is.
When using a Nostr client, you actually publish the same data to something like six different relays at the same time. The design has the built-in assumption that some of those relays will be down at any given time, so by publishing to several at once you get data redundancy.
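To make that concrete, here’s a minimal, hypothetical sketch of what that fan-out publish could look like. The relay URLs are placeholders, the event is left unsigned for brevity (a real client fills in the id, pubkey, and signature per NIP-01), and it assumes the Python `websockets` package — an illustration of the idea, not any particular client’s code.

```python
# Hypothetical sketch: publish one Nostr event to several relays at once.
# Relay URLs are placeholders and the event is unsigned — a real client
# fills in "id", "pubkey", and "sig" per NIP-01 before sending.
import asyncio
import json
import websockets  # pip install websockets

RELAYS = [
    "wss://relay-a.example.com",
    "wss://relay-b.example.com",
    "wss://relay-c.example.com",
]

EVENT = {
    "kind": 1,                     # kind 1 = short text note
    "created_at": 1700000000,
    "tags": [],
    "content": "hello from a fan-out client",
}

async def publish(relay_url: str) -> bool:
    """Send the event to one relay; a failure just means one fewer copy."""
    try:
        async with websockets.connect(relay_url) as ws:
            await ws.send(json.dumps(["EVENT", EVENT]))
            reply = json.loads(await asyncio.wait_for(ws.recv(), timeout=5))
            # Relays acknowledge with ["OK", <event id>, <accepted?>, <message>]
            return reply[0] == "OK" and reply[2] is True
    except Exception:
        return False

async def main() -> None:
    results = await asyncio.gather(*(publish(url) for url in RELAYS))
    print(f"accepted by {sum(results)}/{len(RELAYS)} relays")

asyncio.run(main())
```

The point is just that the redundancy comes from the client writing the same event everywhere, rather than from the relays replicating among themselves.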
Ok, so it’s effectively the same as P2P, just with some guarantees about how many copies you have.
In a P2P setup, your data would be distributed based on some mathematical formula such that it’s statistically very unlikely to be lost if N clients disconnect from the network. The larger the network, the more likely your data is to stick around. Think of BitTorrent, but where you’re randomly selected to seed some number of files in addition to the ones you explicitly opt into.
The risk with something like Nostr is that a lot of people pick the same relays and those relays go down. With the P2P setup I described, data is placed according to a mathematical formula, not human choice, so you’re more likely to still have access to it even if a whole country shuts off its internet or something.
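For a sense of what “distributed according to a mathematical formula” could look like, here’s a small sketch using rendezvous hashing, one common way a DHT-style network can decide which peers hold a given item. The node names and replica count are made up for the example; real P2P networks each use their own placement schemes.

```python
# Sketch of formula-driven replica placement (rendezvous hashing).
# Node names and the replica count are illustrative only.
import hashlib

NODES = [f"node-{i}" for i in range(20)]   # hypothetical peers
REPLICAS = 3                               # copies kept per item

def score(node: str, key: str) -> int:
    """Deterministic weight of a node for a key: hash of node + key."""
    return int.from_bytes(hashlib.sha256(f"{node}/{key}".encode()).digest(), "big")

def owners(key: str, nodes: list[str]) -> list[str]:
    """The REPLICAS highest-scoring nodes hold this key — no human picks them."""
    return sorted(nodes, key=lambda n: score(n, key), reverse=True)[:REPLICAS]

key = "post:12345"
print("stored on:", owners(key, NODES))

# If some peers vanish, the same formula falls through to the surviving
# highest-scoring nodes; the item is only lost if every replica holder
# disappears before the data can be re-replicated elsewhere.
survivors = [n for n in NODES if n not in ("node-3", "node-7")]
print("after failures:", owners(key, survivors))
```

Because placement is a pure function of the key and the current node set, popularity can’t pile everything onto a handful of relays the way human relay choices can.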
Either solution is better than Lemmy/Mastodon or centralized services in terms of surviving something like AWS going down.
Come on, mate… Lemmy as a whole didn’t go down, but instances of Lemmy absolutely did go down. As they regularly do, because shit happens.
that’s pretty disingenuous though… individual lemmy instances go down or have issues regularly… they’re different, but not necessarily worse in terms of stability… for robustness of the system as a whole there’s perhaps an argument in favour of distributed, but the system as a whole isn’t a particularly helpful argument when you’re trying to access your specific account
centralised services are just inherently more stable for the same type of workload because they tend to be less complex, have less network interconnectedness to cause issues, and you can focus a lot more energy on building out automation and recovery rather than repeatedly building the same things… in a distributed system that energy is spread out, but it’s still human effort: centralised systems are likely to be more stable because they’ve had significantly more work put into stability, detection, and recovery
Right, but even if individual instances go down, you don’t end up with headlines all over the world about half the internet being down. Half the internet isn’t down because the network is self-healing: it temporarily blocks off the problem area, and when the instance comes back, it resynchronizes and continues as normal.
Services might be temporarily degraded, but not gone entirely.
but that’s a compromise… it’s not categorically better
you can’t run a bank like you run distributed instances, for example
services have different uptime requirements… this is perhaps the first time i’ve ever heard of signal having downtime, and the second time ever that i can remember there’s been a global AWS incident like this
and not only that, but lemmy and every service you listed aren’t even close to the scale of their centralised counterparts. we just don’t have the knowledge yet to build decentralised services at that scale, so you can’t simply say that centralised services are always worse, less reliable, etc. twitter is the usual example of this: it seems really easy, and arguably you can build a microblogging service in about 30min, but scaling it to the traffic twitter handles is incredibly difficult and involves a lot of computer science (not just software engineering)
That was my point. But as somebody else pointed out here, the difficulties of maintaining the degree of security we currently enjoy as Signal users mean it starts to get eroded away.