
  • Aha, I see you did the text-based install then? I’ve never done that myself, but I just tried it now and it worked fine for me with the default password it mentions. Make sure caps lock is off. You will not be able to see the password as you type it, so be extra careful that you are typing it correctly.

    Most of the same cautions about internet access still apply: if networking is active on this VM, there’s a non-zero chance you can get hacked right away while you’re still in the default-password/initial-setup stage. If you continue to have trouble getting in, reinstall onto a fresh VM with network mode set to NAT if possible, or with networking disabled completely, and see if it works in that configuration. It really is critical to get the password set up before opening up the internet.
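    If you happen to be using VirtualBox, a minimal sketch of locking the VM down during install might look like this (“yunohost” is a placeholder VM name; other hypervisors have equivalent settings):

    ```bash
    # Run these while the VM is powered off.

    # Option 1: plain NAT -- the VM can reach out to the internet, but
    # nothing outside your host can initiate a connection in.
    VBoxManage modifyvm "yunohost" --nic1 nat

    # Option 2: unplug the virtual network cable entirely until the
    # admin password is set.
    VBoxManage modifyvm "yunohost" --cableconnected1 off

    # While the VM is running, the cable can be toggled with:
    VBoxManage controlvm "yunohost" setlinkstate1 off
    ```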


  • cecilkorik@lemmy.ca to Selfhosted@lemmy.world, re: “What do I do -- Incorrect?”

    Not sure what you mean by “what was provided”… who is providing a username and password for your yunohost?

    You are supposed to create your own username and password during the “Begin” setup process after it first installs. “root” and “yunohost” are very insecure, and if you use default passwords copy/pasted from somewhere else on an internet-connected machine, it will be hacked, potentially almost immediately. People run bots that try these common default passwords all day, every day, against every address on the internet. I have literally had machines with crappy passwords hacked within minutes of spinning them up. The same thing can happen while you are still doing the setup process: if somebody else can get in first, they (most likely a bot) can run the setup themselves and set their OWN username and password, and now the login prompt is asking for a password THEY set, which you have no way of knowing. The instance belongs to the first person to claim it, and if that’s not you, you have to wipe it and start over.

    Your yunohost VM interface should not be exposed to the internet during setup, even briefly, or someone else can compromise it exactly like this. The only way to ensure you are the first person to access it is to make sure you are the ONLY person who can access it, until it is properly set up and secured. Bots are WAY faster than you can be.

    Use the localhost console, VM port forwarding, or some other secure method to make sure nobody but your own host computer can reach the server’s IP while you are setting things up. Once it has a strong, secure password (not “yunohost”) and all its security features configured and working, then you can think about making it accessible to the internet.
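    As a hypothetical example of the port-forwarding approach under VirtualBox’s NAT mode (the VM name and port numbers are placeholders), you can bind the forward to 127.0.0.1 so only your own host machine can reach the admin page:

    ```bash
    # Forward host 127.0.0.1:8443 to the guest's HTTPS admin interface.
    # Binding to 127.0.0.1 (rather than 0.0.0.0) means nothing else on
    # your LAN or the internet can reach it.
    VBoxManage modifyvm "yunohost" --natpf1 "admin,tcp,127.0.0.1,8443,,443"

    # Then browse to https://127.0.0.1:8443 from the host only.
    ```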






  • The “Unhook” addon (increasingly required for Youtube now, in my opinion) will still completely block this, as it blocks all shorts. Fuck shorts anyway. Also, as TechnologyConnections pointed out in a recent video, the subscription feed still does (and always has) completely bypass Youtube’s recommended brainrot, and lets you follow the creators and topics you actually care about. Until we have a viable alternative to Youtube (and hopefully stuff like this will drive that to happen sooner rather than later), the other option is to stick to subscriptions as much as possible and, preferably, only subscribe to creators who don’t abuse this or use shorts at all.


  • For RAID, that’s pretty much it as far as I know, but I’m pretty sure it can be a lot simpler and more flexible using some of the newer storage layers out nowadays, like LVM (a volume manager rather than a filesystem), ZFS, and maybe BTRFS. I can’t pretend I’m super up to date on all the latest technologies, but I know they can do some really incredible stuff. I’m not familiar enough to recommend one, but it might be worth looking into what they can do for you if your NAS supports them. From what I understand, they don’t use traditional RAID at all (although they may be able to simulate it); instead they treat disks as JBOD (just a bunch of disks) and use their own strategies to spread filesystems and redundancy across them in safe ways that are far more flexible. They don’t care much about disk sizes or shapes, and I think pools can be expanded and contracted pretty freely. ZFS in particular is really heavily used for this and supports some crazy complicated structures.
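    For a taste of what that looks like in practice, here’s a hedged sketch of ZFS pooling (the pool name and device paths are made up; real setups should use stable /dev/disk/by-id paths):

    ```bash
    # A mirrored pair, roughly equivalent to RAID1:
    zpool create tank mirror /dev/sda /dev/sdb

    # Or, instead, single-parity raidz across three disks, roughly RAID5-like:
    # zpool create tank raidz /dev/sda /dev/sdb /dev/sdc

    # Later, grow the mirrored pool by striping in another mirrored pair:
    zpool add tank mirror /dev/sdc /dev/sdd

    # Check health and layout:
    zpool status tank
    ```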


  • At the end of the day it doesn’t matter much whether they’re in 2x 2-bay units or 1x 4-bay unit that’s backing itself up. Separate NAS boxes might give a little extra redundancy and safety, but the backup software is what’s going to be doing the heavy lifting here, and it shouldn’t really matter whether it’s talking to two different disks/arrays on the same machine or on two machines (as long as the NAS allows you to split the 4 drives into 2 different arrays, which, in my experience, they do).


  • I don’t know what kind of data this is, but since you say the whole household’s data is going to be on it, I want to take a moment to point out that while RAID1 is redundant, it is NOT a backup. Both drives will happily delete, overwrite, corrupt, or encrypt all your data as quickly as you can blink the moment something tells them to, and they will do it simultaneously to both “redundant” copies. It also won’t help if your power supply blows up and nukes both drives at once. RAID1 only guards against the hardware failure of a single disk, nothing else. That failure mode is quite common (and using RAID actually increases the odds of encountering it, since you’re running more disks), but it’s important to remember it’s not the only cause of data loss.

    If any of this data is important and irreplaceable, consider whether you’d be better off spending your additional future budget on a second pair of drives to maintain continuous backups. There are a variety of simple tools that can create incremental, time-machine-like backups from hard-drive-based storage to other hard-drive-based storage while using a minimal amount of additional space (I use this rock-solid script based on rsync, but there are literally dozens of backup tools that do almost exactly the same thing, often using rsync under the hood themselves). This still won’t help you if, say, your house burns down with both drive arrays inside it, but it’s an improvement over a single huge RAID NAS, and it gives you the option to roll back to a known-good snapshot or to restore a file that was deleted or corrupted long ago without you noticing.
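    I won’t reproduce the exact script here, but the core trick most of these tools share is rsync’s --link-dest option. A minimal sketch (paths are placeholders, error handling omitted):

    ```bash
    #!/bin/sh
    # Each run creates a new timestamped snapshot directory. Files that
    # haven't changed since the previous snapshot are hard-linked rather
    # than copied, so every snapshot looks complete on disk but only
    # changed files consume new space.
    SRC="/mnt/nas/data/"
    DST="/mnt/backup"
    NEW="$DST/$(date +%Y-%m-%d_%H%M%S)"

    rsync -a --delete --link-dest="$DST/latest" "$SRC" "$NEW"

    # Point the "latest" symlink at the snapshot we just made.
    ln -snf "$NEW" "$DST/latest"
    ```

    (On the very first run, “latest” doesn’t exist yet; rsync will just warn and make a full copy.)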

    To answer your original question, it generally isn’t possible to do what you’re asking. You might be able to get away with creating the array as RAID1+0 and pretending that half the drives (the mirror halves) have failed, but that means your two existing disks would be running in RAID0 striping mode with no mirrors, and a failure of EITHER one would lose all your data until you get the second two drives installed. And that’s super sketchy and would be tricky to even set up. You cannot instead run a RAID1+0 on only the two drives of one mirrored pair, because they’d hold just half of every stripe. In fact, if that happens on a live array, you lose the whole array in that case too. Despite having 4 drives, RAID1+0 is technically still only singly-redundant: any single failure can be tolerated, but two failures can make the whole array unrecoverable if they happen to be the wrong two (both drives of the same mirrored pair), and because of the striping, it really is unrecoverable. Only small chunks of each file would be left on the surviving mirrored pair.

    In almost all cases, changing the geometry of the array means rebuilding it from scratch, and you usually need some form of temporary storage to do that. The good news is, if you decide to add 2 drives to an existing 2-drive RAID1 setup, you have 4 drives of 4TB each, and you cannot possibly have more than 4TB of data, because your existing RAID1 pair only has 4TB of usable capacity between them. You can probably use 3 of those drives to set up a 4-drive RAID1+0 with one drive missing, after copying all the data from your RAID1 array onto drive #4 temporarily. Once the 3-drive array is up, copy the data back onto it. Finally, slot drive #4 into the NAS as well, treating it as a “new” drive replacing the “failed” one, and the array should sync over all the stripes it needs and bring it into the array properly. This is all definitely possible with Linux’s built-in software RAID tools (I’ve done stupider things), but whether your specific NAS box will let you do this successfully is something I can’t promise.
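    With stock Linux mdadm, the deliberately-degraded creation step might look something like this (device names are placeholders, and a NAS appliance may not expose this level of control at all):

    ```bash
    # Build a 4-slot RAID10 with one slot intentionally empty. The
    # literal keyword "missing" stands in for the absent drive.
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda /dev/sdb /dev/sdc missing

    # ...format /dev/md0 and copy the data back onto it from drive #4...

    # Then hand drive #4 over; the resync starts automatically.
    mdadm --manage /dev/md0 --add /dev/sdd
    mdadm --detail /dev/md0   # watch the rebuild progress
    ```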

    It’s important to keep in mind this is all sketchy as hell (remember what I said about backups and asking whether this data was irreplaceable? yeah. don’t stop thinking about that), but technically it should work.

    Edit to add: Another perspective is, once you get your 2 additional drives, you can turn your NAS drives and backup drives into two RAID0 arrays to extend them. A pair of 4TB drives in RAID0 gives you the 8TB of storage you ultimately want; the second pair in RAID0 gives you 8TB to make regular backups of the primary. Again you need to do some array rebuilding, but this time you have an already-existing backup, so you don’t even have to dance around creating initially-broken arrays. Yes, the risk of a single drive failure taking down one of the arrays is much higher, but that’s what your backups are for. If a drive fails, you either lose the primary array (which sucks, but you still have all your backups safe and sound on the other RAID0) or you lose the backup (not a big deal, because the primary is still happy and healthy, and once you fix the backup array you can start making new backups again). Either way, you’re now relying on an actual backup strategy to keep your data safe instead of relying on RAID1, which is not a backup. The only things you lose are continuous uptime if the primary array does fail, and RAID1’s ability to read from both drives at once for a theoretical read-speed boost. But in my opinion, the advantages of a proper scheduled backup outweigh that.
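    Continuing the hypothetical mdadm sketch from above (placeholder devices again), the two-RAID0 layout is simpler to build, since neither array ever has to start out degraded:

    ```bash
    # Primary: two 4TB drives striped into one 8TB volume.
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb

    # Backup target: the other two drives, also striped to 8TB.
    mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
    ```

    Then point a scheduled backup job (the rsync snapshot sketch earlier, for instance) from the filesystem on md0 at the filesystem on md1.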


  • It is, but it’s also necessary sometimes. If governments didn’t have any power and could just be ignored or openly defied without consequences, we wouldn’t have to care about what they want to censor. But they do have power, despite all our wishing that they didn’t, and we can’t organize a resistance to them without careful maneuvering and sometimes at least making an appearance of playing by their rules. Government censorship you can unsubscribe from is objectively better than censorship you can’t. Don’t let perfect be the enemy of good.



  • It’s not Peertube, but as at least a step away from Youtube, I’ve found that a lot of my favourite creators immediately cross-post all their videos to Odysee (including electronics guys like Louis, Bigclive, GreatScott, etc.), and I’ve also found some new channels to watch there. It’s not a great site; it’s marginally better than Youtube, which is not a high bar. For obvious reasons I’m looking forward to finding recommendations on Peertube too, so I’ll be watching this thread.




  • It’s mostly a relic from an older time, though it can still be useful for more traditional services and situations that struggle with sharing public IPs. In theory, things like multiple IP addresses (and IPv6’s nearly unlimited address space) could be used to make things simpler: no reverse proxies, NAT, or port forwarding, all of which were once viewed as excessive complexity, if not outright ugly hacks, rather than the virtual necessity they are today.

    Each service would have its own dedicated public IP; you’d connect them up with plain IP routing the way the kernel gods intended, and everything would be straightforward, clear, and happy. If such a quantity of IPs were freely available, this would indeed be a simpler life in many ways. And yet it’s such a distant fantasy now that it’s understandable (though a little funny) to hear you describe it as “additional complexity” when, depending on how you look at it, the opposite is true…

    From a modern perspective, you’re absolutely right. The tables have really been turned: we have taken the shortage of IP addresses in stride and built elaborate tools and layers of abstraction that turn those lemons into lemonade. By virtualizing connections through featureful, easily configurable software layers (private IP ranges, IP masquerading, proxies, tunnels), we achieve immense flexibility and reliable security. Most software now natively supports handling multiple services on a single IP or even a single port, and in some cases requires it. This was not always the case.
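    For example, the now-ubiquitous reverse-proxy pattern lets one public IP serve any number of name-based services. A hypothetical sketch (hostnames, backend ports, and the config path are all made up):

    ```bash
    # Two services sharing one public IP and one port, routed by hostname.
    cat > /etc/nginx/conf.d/two-services.conf <<'EOF'
    server {
        listen 80;
        server_name cloud.example.com;
        location / { proxy_pass http://127.0.0.1:8081; }
    }
    server {
        listen 80;
        server_name media.example.com;
        location / { proxy_pass http://127.0.0.1:8082; }
    }
    EOF

    # Validate the config, then apply it.
    nginx -t && systemctl reload nginx
    ```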

    It’s sort of like the divide between hardware RAID and software RAID. Once upon a time, software RAID was slow, messy, confusing, unreliable, and distinctly inferior to “true” hardware RAID, which was plug-and-play with powerful configuration options. Nobody would willingly use software RAID if they had any other choice; the best RAID cards sold for thousands of dollars, and motherboards advertised how much hardware RAID they had built in. But over time, as CPUs and software became faster and more powerful, the tide turned, and people started to realize that hardware RAID was actually the one that left you tied to an expensive proprietary controller, which could fail or become obsolete and leave your array a difficult-to-migrate, difficult-to-recover mess. Software RAID, meanwhile, was reliable, predictable, and upgradable, supported a wide variety of disk types and layouts, performed solidly, and was generally far nicer to work with. It became the more common configuration and found its way into almost every OS. You can now set up software RAID simply by clicking an option in a menu, even in Windows, and it basically works flawlessly without any additional thought.

    Times change, we adapt to the technologies that are most common and that work the best in the situations we’re using them in, and we make them better until they’re not just a last resort anymore, but become a first choice, while the old way becomes a confusing anachronism. That’s what multiple public IPs have become nowadays, for most purposes.




  • The whole point of federation and open protocols is that you aren’t tied to any specific piece of software, any single provider, or any single set of features. People can experiment, innovate, collaborate, and build new things on top without losing access or interfering with people who prefer the old methods. People or software that abuse the system, on the other hand, can be blocked or defederated.

    A healthy software ecosystem should have many different pieces of software all written by different people with different goals, but all implementing most of the same things. Some will be more popular than others, and the popular ones might not agree with your own personal tastes, but that’s just life. The point is that we (and software developers) all have the freedom to choose how we interact with this system without any formal rules or maintainer group deciding what is allowed and what isn’t (except within their own software and/or instance).

    > and they will be cross compatible enough that it won’t be much of a deal what project is running underneath?

    They are already cross-compatible enough; they are as cross-compatible as they need to be. It’s not clear what more you could ask for. If you want them all to look and work exactly the same, then what’s the point of having different software at all? You’re acting like the different features and choices are a downside when they are in fact a benefit. Pick the one you like the most and use it. If you like Piefed’s hashtags, then use Piefed, it’s great! There’s nothing “locked away” in Piefed; everything in it is available to everybody, as it should be!