#difference between linux and windows operating systems
donjuaninhell · 1 year ago
How I ditched streaming services and learned to love Linux: A step-by-step guide to building your very own personal media streaming server (V2.0: REVISED AND EXPANDED EDITION)
This is a revised, corrected and expanded version of my tutorial on setting up a personal media server that previously appeared on my old blog (donjuan-auxenfers). I expect that post is still making the rounds (hopefully with my addendum on modifying group share permissions in Ubuntu to circumvent 0x8007003B "Unexpected Network Error" messages in Windows 10/11 when transferring files), but I have no way of checking. Anyway, this new revised version of the tutorial corrects one or two small errors I discovered when rereading what I wrote, adds links to all products mentioned, and is just more polished generally. I also expanded it a bit, pointing more adventurous users toward programs such as Sonarr/Radarr/Lidarr and Overseerr, which can be used for automating user requests and media collection.
So then, what is this tutorial? This is a tutorial on how to build and set up your own personal media server, using Ubuntu as an operating system and Plex (or Jellyfin) to not only manage your media but also stream it to your devices both at home and anywhere in the world where you have an internet connection. Its intent is to show you how building a personal media server and stuffing it full of films, TV, and music that you acquired through various legal methods (read: indiscriminate and voracious media piracy) will free you to completely ditch paid streaming services. No more will you have to pay for Disney+, Netflix, HBO Max, Hulu, Amazon Prime, Peacock, CBS All Access, Paramount+, Crave or any other streaming service that is not named Criterion Channel. Instead, whenever you want to watch your favourite films and television shows, you'll have your own personal service that only features things you want to see, with files that you control. And for the music fans out there, both Jellyfin and Plex support music streaming, meaning you can ditch music streaming services too. Goodbye Spotify, YouTube Music, Tidal and Apple Music; welcome back, unreasonably large MP3 (or FLAC) collections.
On the hardware front, I'm going to offer a few options catered to different budgets and media library sizes. Getting a media server up and running using this guide will cost you anywhere from $450 CAD/$325 USD at the low end to $1,500 CAD/$1,100 USD at the high end (it could go higher). My server was priced closer to the higher figure, but I went and got a lot more storage than most people need. If that seems like a little much, consider for a moment: do you have a roommate, a close friend, or a family member who would be willing to chip in a few bucks towards your little project provided they get access? That's how I funded my server. It might also be worth thinking about the cost over time, i.e. how much you spend yearly on subscriptions vs. the one-time cost of setting up a server. Additionally, there's just the joy of being able to scream "fuck you" at all those show-cancelling, library-deleting, hedge fund vampire CEOs who run the studios, by denying them your money. Drive a stake through David Zaslav's heart.
On the software side I will walk you step-by-step through installing Ubuntu as your server's operating system, configuring your storage as a RAIDz array with ZFS, sharing your zpool to Windows with Samba, running a remote connection between your server and your Windows PC, and then a little about getting started with Plex/Jellyfin. Every terminal command you will need to input will be provided, and I even share a custom bash script that will make used vs. available drive space on your server display correctly in Windows.
If you have a different preferred flavour of Linux (Arch, Manjaro, Red Hat, Fedora, Mint, openSUSE, CentOS, Slackware, et al.) and are aching to tell me off for being basic and using Ubuntu, this tutorial is not for you. The sort of person with a preferred Linux distro is the sort of person who can do this sort of thing in their sleep. Also, I don't care. This tutorial is intended for the average home computer user. This is also why we're not using a more exotic home server solution like running everything through Docker containers and managing it through a dashboard like Homarr or Heimdall. While such solutions are fantastic and can be very easy to maintain once you have them all set up, wrapping your brain around Docker is a whole thing in and of itself. If you follow this tutorial and have fun putting everything together, then I would encourage you to return in a year's time, do your research, and set everything up with Docker containers.
Lastly, this is a tutorial aimed at Windows users. Although I was a daily user of OS X for many years (roughly 2008-2023) and I've dabbled quite a bit with various Linux distributions (mostly Ubuntu and Manjaro), my primary OS these days is Windows 11. Many things in this tutorial will still be applicable to Mac users, but others (e.g. setting up shares) you will have to look up for yourself. I doubt it would be difficult to do so.
Nothing in this tutorial will require feats of computing expertise. All you will need is basic computer literacy (i.e. an understanding of what a filesystem and directory are, and a degree of comfort in the settings menu) and a willingness to learn a thing or two. While this guide may look overwhelming at first glance, it is only because I want to be as thorough as possible. I want you to understand exactly what it is you're doing; I don't want you to just blindly follow steps. If you halfway know what you're doing, you will be much better prepared if you ever need to troubleshoot.
Honestly, once you have all the hardware ready it shouldn't take more than an afternoon or two to get everything up and running.
(This tutorial is just shy of seven thousand words long so the rest is under the cut.)
Step One: Choosing Your Hardware
Linux is a lightweight operating system; depending on the distribution there's close to no bloat. There are recent distributions available at this very moment that will run perfectly fine on a fourteen-year-old i3 with 4GB of RAM. Moreover, running Plex or Jellyfin isn't resource intensive in 90% of use cases. All this is to say: we don't require an expensive or powerful computer. This means that there are several options available: 1) use an old computer you already have sitting around but aren't using, 2) buy a used workstation from eBay, or, what I believe to be the best option, 3) order an N100 mini-PC from AliExpress or Amazon.
Note: If you already have an old PC sitting around that you’ve decided to use, fantastic, move on to the next step.
When weighing your options, keep a few things in mind: the number of people you expect to be streaming simultaneously, the resolution and bitrate of your media library (4K video takes a lot more processing power than 1080p), and most importantly, how many of those clients are going to be transcoding at any one time. Transcoding is what happens when the playback device does not natively support direct playback of the source file. This can happen for a number of reasons, such as the playback device's native resolution being lower than the file's internal resolution, or because the source file was encoded in a video codec unsupported by the playback device.
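If you're ever curious whether a given file is likely to need transcoding, you can inspect its codec, bit depth and resolution with ffprobe (part of the free ffmpeg toolkit; the filename here is just a placeholder):

ffprobe -v error -select_streams v:0 -show_entries stream=codec_name,profile,pix_fmt,width,height -of default=noprint_wrappers=1 somefile.mkv

A codec_name of hevc with a pix_fmt like yuv420p10le means a 10-bit H.265 file, which is exactly the sort of detail that matters in the hardware discussion below.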
Ideally we want any transcoding to be performed by hardware. This means we should be looking for a computer with an Intel processor with Quick Sync. Quick Sync is a dedicated block on the CPU die designed specifically for video encoding and decoding. This specialized hardware makes for highly efficient transcoding, both in terms of processing overhead and power draw. Without Quick Sync, transcoding must be brute-forced through software, which takes up much more of the CPU's processing power and requires much more energy. But not all Quick Sync implementations are created equal, and you need to keep this in mind if you've decided either to use an old computer or to shop for a used workstation on eBay.
Any Intel processor from second generation Core (Sandy Bridge, circa 2011) onward has Quick Sync. It's not until 6th gen (Skylake), however, that it supports the H.265 HEVC codec. Intel's 10th gen (Comet Lake) processors introduce support for 10-bit HEVC and HDR tone mapping. And the recent 12th gen (Alder Lake) processors brought with them hardware AV1 decoding. As an example, while an 8th gen (Coffee Lake) i5-8500 will be able to hardware transcode an H.265 encoded file, it will fall back to software transcoding if given a 10-bit H.265 file. If you've decided to use that old PC or to look on eBay for an old Dell OptiPlex, keep this in mind.
Note 1: The price of old workstations varies wildly and fluctuates frequently. If you get lucky and go shopping shortly after a workplace has liquidated a large number of their workstations you can find deals for as low as $100 on a barebones system, but generally an i5-8500 workstation with 16gb RAM will cost you somewhere in the area of $260 CAD/$200 USD.
Note 2: The AMD equivalent to Quick Sync is called Video Core Next, and while it's fine, it's not as efficient or as mature a technology. It was only introduced with the first generation Ryzen CPUs and it only got decent with the newest generations; we want something cheap.
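Note 3: Once the machine is running Ubuntu (see Step Three), you can verify what the processor can hardware-decode with the vainfo utility. A quick sketch, assuming Ubuntu's usual package name:

sudo apt install vainfo
vainfo | grep -iE 'hevc|av1'

If the output lists HEVC Main10 profiles, the chip can handle 10-bit H.265 in hardware; if AV1 profiles appear, it can handle AV1 too.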
Alternatively, you could forgo having to keep track of which generation of CPU has Quick Sync support for which codecs, and just buy an N100 mini-PC. For around the same price as a used workstation, or less, you can pick up a mini-PC with an Intel N100 processor. The N100 is a four-core processor based on the 12th gen Alder Lake architecture and comes equipped with the latest revision of Quick Sync. These little processors offer astounding hardware transcoding capabilities for their size and power draw. Otherwise they perform about on par with an i5-6500, which isn't a terrible CPU. A friend of mine uses an N100 machine as a dedicated retro emulation gaming system and it handles everything up to 6th generation consoles just fine. The N100 is also a remarkably efficient chip; it sips power. In fact, the difference between running one of these and an old workstation could work out to hundreds of dollars a year in energy bills, depending on where you live.
You can find these mini-PCs all over Amazon, or for a little cheaper on AliExpress. They range in price from $170 CAD/$125 USD for a no-name N100 with 8GB RAM to $280 CAD/$200 USD for a Beelink S12 Pro with 16GB RAM. The brand doesn't really matter; they're all coming from the same three factories in Shenzhen, so go for whichever one fits your budget or has features you want. 8GB RAM should be enough (Linux is lightweight and Plex only calls for 2GB), though 16GB might result in a slightly snappier experience, especially with ZFS. A 256GB SSD is more than enough for our boot drive, but a bigger drive might let you get away with extras like generating preview thumbnails in Plex; it's up to you and your budget.
The Mini-PC I wound up buying was a Firebat AK2 Plus with 8GB RAM and a 256GB SSD. It looks like this:
[photo of the Firebat AK2 Plus mini-PC]
Note: If you decide to order a mini-PC from AliExpress, pay attention to the type of power adapter it ships with. The one I bought came with an EU power adapter, and I had to supply my own North American power supply. Thankfully this is a minor issue, as barrel-plug 30W/12V/2.5A power adapters are easy to find and can be had for $10.
Step Two: Choosing Your Storage
Storage is the most important part of our build. It is also the most expensive. Thankfully it's also the most easily upgradeable down the line.
For people with a smaller media collection (4TB to 8TB), a more limited budget, or who will only ever have two simultaneous streams running, I would say that the most economical course of action would be to buy a USB 3.0 8TB external HDD. Something like this one from Western Digital or this one from Seagate. One of these external drives will cost you in the area of $200 CAD/$140 USD. Down the line you could add a second external drive or replace it with a multi-drive RAIDz set up such as detailed below.
If a single external drive is the path for you, move on to step three.
For people with larger media libraries (12TB+), who prefer media in 4K, or who care about data redundancy, the answer is a RAID array featuring multiple HDDs in an enclosure.
Note: If you are using an old PC or used workstation as your server and it has room for at least three 3.5" drives, plus as many open SATA ports on your motherboard, you won't need an enclosure; just install the drives in the case. If your old computer is a laptop or doesn't have room for more internal drives, then I would suggest an enclosure.
The minimum number of drives needed to run a RAIDz array is three, and seeing as RAIDz is what we will be using, you should be looking for an enclosure with three to five bays. I think four disks makes for a good compromise for a home server. Regardless of whether you go for a three, four, or five bay enclosure, do be aware that in a RAIDz1 array the space equivalent of one drive is dedicated to parity, leaving you a usable fraction of 1 − 1/n of the raw capacity (where n is the number of drives). For example, in a four bay enclosure equipped with four 12TB drives configured as RAIDz1, we would be left with 36TB of usable space out of 48TB raw. The reason we might sacrifice storage space in this manner will be explained in the next section.
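To sanity-check usable capacity for other drive counts, here's a quick back-of-the-envelope sketch in the shell (sizes in TB; ZFS overhead means real figures come in slightly lower):

n=4; s=12
echo "raw: $((n*s))TB, RAIDz1 usable: $(( (n-1)*s ))TB, RAIDz2 usable: $(( (n-2)*s ))TB"

With four 12TB drives this prints 48TB raw, 36TB usable for RAIDz1, and 24TB for RAIDz2 (RAIDz2 is covered later in the zpool setup step).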
A four bay enclosure will cost somewhere in the area of $200 CAD/$140 USD. You don't need anything fancy; we don't need hardware RAID controls (RAIDz is done entirely in software) or even USB-C. An enclosure with USB 3.0 will perform perfectly fine. Don't worry too much about USB speed bottlenecks: a mechanical HDD will be limited by the speed of its mechanism long before it will be limited by the speed of a USB connection. I've seen decent looking enclosures from TerraMaster, Yottamaster, Mediasonic and Sabrent.
When it comes to selecting the drives, as of this writing, the best value (dollar per gigabyte) are those in the range of 12TB to 20TB. I settled on 12TB drives myself. If 12TB to 20TB drives are out of your budget, go with what you can afford, or look into refurbished drives. I'm not sold on the idea of refurbished drives but many people swear by them.
When shopping for hard drives, search for drives designed specifically for NAS use. They typically have better vibration dampening and are designed to run 24/7. They will also often use CMR (conventional magnetic recording) as opposed to SMR (shingled magnetic recording), which nets them a sizable read/write performance bump over typical desktop drives. Seagate IronWolf and Toshiba's NAS line are both well regarded when it comes to NAS drives. I would avoid Western Digital Red drives at this time. WD Reds were a go-to recommendation up until earlier this year, when it was revealed that their firmware will quite often throw up false SMART warnings telling you to replace the drive at the three-year mark even when there is nothing at all wrong with it, and it will likely be good for another six, seven, or more years.
Step Three: Installing Linux
For this step you will need a USB thumb drive of at least 6GB in capacity, an .ISO of Ubuntu, and a way to make that thumb drive into bootable media.
First download a copy of Ubuntu Desktop. (For best performance we could use the Server release, but I recommend new Linux users avoid it: Server is strictly command-line only, and having a GUI is very helpful for most people. Not many people are wholly comfortable doing everything through the command line; I'm certainly not one of them, and I grew up with DOS 6.0.) 22.04.3 Jammy Jellyfish is the current Long Term Support (LTS) release; this is the one to get.
Download the .ISO, then download and install balenaEtcher on your Windows PC. BalenaEtcher is an easy-to-use program for creating bootable media: you simply insert your thumb drive, select the .ISO you just downloaded, and it will create bootable installation media for you.
Once you've made bootable media and you've got your mini-PC (or your old PC/used workstation) in front of you, hook it directly into your router with an ethernet cable, then plug in the HDD enclosure, a monitor, a mouse and a keyboard. Now turn that sucker on and hit whatever key gets you into the BIOS (typically ESC, DEL or F2). If you're using a mini-PC, check that the PL1 and PL2 power limits are set correctly; my N100's PL1 was set at 10W, a full 20W under the chip's power limit. Also make sure that the RAM is running at its advertised speed. My mini-PC's RAM was set at 2333MHz out of the box when it should have been 3200MHz. Once you've done that, key over to the boot order and place the USB drive first in the boot order. Then save the BIOS settings and restart.
After you restart, you'll be greeted by Ubuntu's installation screen. Installing Ubuntu is really straightforward: select the "minimal" installation option, as we won't need anything on this computer except a browser (Ubuntu comes preinstalled with Firefox) and Plex Media Server/Jellyfin Media Server. Also remember to delete and reformat that Windows partition! We don't need it.
Step Four: Installing ZFS and Setting Up the RAIDz Array
Note: If you opted for just a single external HDD skip this step and move onto setting up a Samba share.
Once Ubuntu is installed, it's time to configure our storage by installing ZFS and building our RAIDz array. ZFS is a "next-gen" file system that is both massively flexible and massively complex. It's capable of snapshot backups and self-healing error correction, and ZFS pools can be configured with drives operating in a supplemental manner alongside the storage vdev (e.g. a fast cache drive, a dedicated separate intent log (SLOG), hot-swap spares, etc.). It's also a file system very amenable to fine-tuning: block and sector size are adjustable to the use case, and you're afforded a choice of different methods of inline compression. If you'd like a very detailed overview and explanation of its various features and tips on tuning a ZFS array, check out these articles from Ars Technica. For now we're going to ignore all these features and keep it simple: we're going to pool our drives into a single vdev running in RAIDz, which will be the entirety of our zpool. No fancy cache drive or SLOG.
Open up the terminal and type the following commands:
sudo apt update
then
sudo apt install zfsutils-linux
This will install the ZFS utility. Verify that it's installed with the following command:
zfs --version
Now, it's time to check that the HDDs we have in the enclosure are healthy, running, and recognized. We also want to find out their device IDs and take note of them:
sudo fdisk -l
Note: You might be wondering why some of these commands require "sudo" in front of them while others don't. "Sudo" is short for "super user do”. When and where "sudo" is used has to do with the way permissions are set up in Linux. Only the "root" user has the access level to perform certain tasks in Linux. As a matter of security and safety regular user accounts are kept separate from the "root" user. It's not advised (or even possible) to boot into Linux as "root" with most modern distributions. Instead by using "sudo" our regular user account is temporarily given the power to do otherwise forbidden things. Don't worry about it too much at this stage, but if you want to know more check out this introduction.
If everything is working you should get a list of the various drives detected along with their device IDs which will look like this: /dev/sdc. You can also check the device IDs of the drives by opening the disk utility app. Jot these IDs down as we'll need them for our next step, creating our RAIDz array.
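Note: /dev/sdX names can change between reboots. It's optional, but a more robust habit is to identify drives by their stable hardware IDs, which can be used in place of the /dev/sdX names in the zpool commands below:

ls -l /dev/disk/by-id/ | grep -v part

This lists each drive's persistent ID (usually model plus serial number) alongside the sdX device it currently points to.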
RAIDz is similar to RAID-5. Instead of striping your data over multiple disks, exchanging redundancy for speed and available space (RAID-0), or mirroring your data by writing two copies of every piece (RAID-1), it writes parity blocks across the disks in addition to striping, providing a balance of speed, redundancy and available space. If a single drive fails, the parity blocks on the working drives can be used to reconstruct the entire array once a replacement drive is added.
Additionally, RAIDz improves on some of the common RAID-5 flaws. It's more resilient and capable of self-healing, as it automatically checks data against checksums. It's more forgiving in this way, and it's likely that you'll be able to detect a dying drive well before it fails. A RAIDz array can survive the loss of any one drive.
Note: While RAIDz is indeed resilient, if a second drive fails during the rebuild, you're fucked. Always keep backups of things you can't afford to lose. This tutorial, however, is not about proper data safety.
To create the pool, use the following command:
sudo zpool create "zpoolnamehere" raidz "device IDs of drives we're putting in the pool"
For example, let's creatively name our zpool "mypool". This pool will consist of four drives with the device IDs sdb, sdc, sdd, and sde. The resulting command will look like this:
sudo zpool create mypool raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
If, as an example, you bought five HDDs and decided you wanted more redundancy by dedicating two drives to parity, we would modify the command to "raidz2" and it would look something like the following:
sudo zpool create mypool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
An array configured like this is known as RAIDz2 and is able to survive two disk failures.
Once the zpool has been created, we can check its status with the command:
zpool status
Or more concisely with:
zpool list
The nice thing about ZFS as a file system is that a pool is ready to go immediately after creation. If we were to set up a traditional RAID-5 array using mdadm, we'd have to sit through a potentially hours-long process of formatting and partitioning the drives. Instead we're ready to go right out of the gate.
The zpool should be automatically mounted to the filesystem after creation, check on that with the following:
df -hT | grep zfs
Note: If your computer ever loses power suddenly, say in the event of a power outage, you may have to re-import your pool. In most cases ZFS will automatically import and mount your pool, but if it doesn't and you can't see your array, simply open the terminal and type sudo zpool import -a.
By default a zpool is mounted at /"zpoolname". The pool should already be under our ownership, but let's make sure with the following command:
sudo chown -R "yourlinuxusername" /"zpoolname"
Note: Changing file and folder ownership with "chown" and file and folder permissions with "chmod" are essential commands for much of the admin work in Linux, but we won't be dealing with them extensively in this guide. If you'd like a deeper tutorial and explanation you can check out these two guides: chown and chmod.
You can access the zpool file system through the GUI by opening the file manager (the Ubuntu default file manager is called Nautilus) and clicking on "Other Locations" on the sidebar, then entering the Ubuntu file system and looking for a folder with your pool's name. Bookmark the folder on the sidebar for easy access.
Your storage pool is now ready to go. Assuming that we already have some files on our Windows PC that we want to copy over, we're going to need to install and configure Samba to make the pool accessible from Windows.
Step Five: Setting Up Samba/Sharing
Samba is what's going to let us share the zpool with Windows and allow us to write to it from our Windows machine. First let's install Samba with the following commands:
sudo apt-get update
then
sudo apt-get install samba
Next, create a password for Samba:
sudo smbpasswd -a "yourlinuxusername"
It will then prompt you to create a password. Just reuse your Ubuntu user password for simplicity's sake.
Note: if you're using just a single external drive, replace the zpool location in the following commands with wherever your external drive is mounted; for more information see this guide on mounting an external drive in Ubuntu.
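If you're unsure where (or whether) your external drive is mounted, here's a minimal sketch for checking and mounting it manually; the device name and mount point are assumptions, yours may differ:

lsblk -f
sudo mkdir -p /media/mediadrive
sudo mount /dev/sdb1 /media/mediadrive

In practice, Ubuntu's desktop will usually auto-mount an external drive under /media/"yourlinuxusername"/ as soon as you open it in the file manager.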
After you've created a password, we're going to create a shareable folder in our pool with this command:
mkdir /"zpoolname"/"foldername"
Now we're going to open the smb.conf file and make that folder shareable. Enter the following command.
sudo nano /etc/samba/smb.conf
This will open the .conf file in nano, the terminal text editor program. Now at the end of smb.conf add the following entry:
["foldername"]
path = /"zpoolname"/"foldername"
available = yes
valid users = "yourlinuxusername"
read only = no
writable = yes
browseable = yes
guest ok = no
Ensure that there are no blank lines within the entry and that there's a space on both sides of each equals sign. Our next step is to allow Samba traffic through the firewall:
sudo ufw allow samba
Finally restart the Samba service:
sudo systemctl restart smbd
At this point we'll be able to access the pool, browse its contents, and read and write to it from Windows. But there's one more thing left to do: Windows doesn't natively support the ZFS file system and will read the used/available/total space in the pool incorrectly. Windows will report available space as total drive space, and all used space as null. This leads to Windows displaying only a dwindling amount of "available" space as the drives are filled. We can fix this! Functionally it doesn't actually matter (we can still read and write to the disk just fine); it only makes it difficult to tell at a glance the proportion of used to available space. So this is an optional step, but one I recommend (it's also unnecessary if you're just using a single external drive). What we're going to do is write a little shell script in bash. Open nano in the terminal with the command:
nano
Now insert the following code:
#!/bin/bash
# Report total and available space (in 1K blocks) for the queried path,
# using ZFS's own figures when the path is on a ZFS pool and df otherwise.
CUR_PATH=`pwd`
ZFS_CHECK_OUTPUT=$(zfs get type $CUR_PATH 2>&1 > /dev/null)
if [[ $ZFS_CHECK_OUTPUT == *not\ a\ ZFS* ]]
then
    IS_ZFS=false
else
    IS_ZFS=true
fi
if [[ $IS_ZFS = false ]]
then
    # Not a ZFS path: fall back to df (columns 2 and 4 are total and available)
    df $CUR_PATH | tail -1 | awk '{print $2" "$4}'
else
    # ZFS path: query used and available directly, converting bytes to 1K blocks
    USED=$((`zfs get -o value -Hp used $CUR_PATH` / 1024)) > /dev/null
    AVAIL=$((`zfs get -o value -Hp available $CUR_PATH` / 1024)) > /dev/null
    TOTAL=$(($USED+$AVAIL)) > /dev/null
    echo $TOTAL $AVAIL
fi
Save the script as "dfree.sh" to /home/"yourlinuxusername", then change the file's permissions to make it executable with this command:
sudo chmod 774 dfree.sh
Now open smb.conf with sudo again:
sudo nano /etc/samba/smb.conf
Now add this entry to the top of the configuration file to direct Samba to use the results of our script when Windows asks for a reading on the pool's used/available/total drive space:
[global]
dfree command = /home/"yourlinuxusername"/dfree.sh
Save the changes to smb.conf and then restart Samba again with the terminal:
sudo systemctl restart smbd
Now there's one more thing we need to do to fully set up the Samba share, and that's to add our user to a hidden share group. In the terminal window type the following command:
sudo usermod -a -G sambashare "yourlinuxusername"
Then restart samba again:
sudo systemctl restart smbd
If we don't do this last step, everything will appear to work fine, and you will even be able to see and map the drive from Windows and begin transferring files, but you'd soon run into a lot of frustration: every ten minutes or so a file will fail to transfer and you'll get a window announcing "0x8007003B Unexpected Network Error" that requires your manual input to continue the transfer with the next file in the queue. At the end, Windows will reattempt to transfer whichever files failed the first time around. 99% of the time they go through on that second try, but this is still a major pain in the ass, especially if you've got a lot of data to transfer or you want to step away from the computer for a while.
It turns out Samba can act a little weirdly with the higher read/write speeds of RAIDz arrays and transfers from Windows, and will intermittently crash and restart itself if this group membership isn't changed. Inputting the above command will prevent you from ever seeing that error window.
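To confirm the group change took effect, run a quick check (log out and back in first if the group doesn't show up immediately):

groups "yourlinuxusername"

You should see sambashare among the groups this prints.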
The last thing we're going to do before switching over to our Windows PC is grab the IP address of our Linux machine. Enter the following command:
hostname -I
This will spit out this computer's IP address on the local network (it will look something like 192.168.0.x); write it down. It might be a good idea, once you're done here, to go into your router settings and reserve that IP for your Linux system in the DHCP settings. Check the manual for your specific model of router on how to access its settings; typically they can be reached by opening a browser and entering http://192.168.0.1 in the address bar, but your router may be different.
Okay, we're done with our Linux computer for now. Get on over to your Windows PC, open File Explorer, right click on Network and click "Map network drive". Select Z: as the drive letter (you don't want to map the network drive to a letter you could conceivably be using for other purposes) and enter the IP of your Linux machine and the name of the share you created in smb.conf, like so: \\"LINUXCOMPUTERLOCALIPADDRESSGOESHERE"\"foldernamegoeshere"\. Windows will then ask you for your username and password; enter the ones you set earlier in Samba and you're good. If you've done everything right it should look something like this:
[screenshot of the mapped network share in Windows File Explorer]
You can now start moving media over from Windows to the share folder. It's a good idea to have a hard line running to all machines: moving files over Wi-Fi is going to be torturously slow, and the only thing that will make the transfer time tolerable (hours instead of days) is a solid wired connection between both machines and your router.
Step Six: Setting Up Remote Desktop Access to Your Server
After the server is up and going, you’ll want to be able to access it remotely from Windows. Barring serious maintenance/updates, this is how you'll access it most of the time. On your Linux system open the terminal and enter:
sudo apt install xrdp
Then:
sudo systemctl enable xrdp
Once it's finished installing, open "Settings" from the sidebar and turn off "automatic login" in the User category, then log out of your account. Attempting to remotely connect to your Linux computer while you're logged in locally will result in a black screen!
Now get back on your Windows PC, open search, and look for "RDP". A program called "Remote Desktop Connection" should pop up; run it as an administrator by right-clicking and selecting "Run as administrator". You'll be greeted with a window; in the field marked "Computer", type in the IP address of your Linux computer. Press Connect and you'll be greeted with a new window and a prompt asking for your username and password. Enter your Ubuntu username and password here.
If everything went right, you'll be logged into your Linux computer. If the performance is sluggish, adjust the display options; lowering the resolution and colour depth does a lot to make the interface feel snappier.
Remote access is how we're going to be using our Linux system from now on, barring edge cases like needing to get into the BIOS or upgrading to a new version of Ubuntu. Everything else, from performing maintenance like a monthly zpool scrub to checking zpool status and updating software, can be done remotely.
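As an example of that maintenance, here's what a scrub looks like from a remote session; the pool name is the one from Step Four, and the schedule below is just a suggestion to adjust to taste:

sudo zpool scrub mypool

zpool status mypool

The first command kicks off the scrub, the second reports its progress and results. If you'd rather automate it, open your root crontab with sudo crontab -e and add a line like 0 3 1 * * /usr/sbin/zpool scrub mypool to scrub at 3 AM on the first of every month (confirm the path on your install with which zpool).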
This is how my server lives its life now, happily humming and chirping away on the floor next to the couch in a corner of the living room.
Step Seven: Plex Media Server/Jellyfin
Okay, we've got all the groundwork finished and our server is almost ready. Ubuntu is installed, our storage array is primed, we've set up remote connections and sharing, and maybe we've even moved over some of our favourite movies and TV shows.
Now we need to decide on the media server software that will stream our media to us and organize our library. For most people I'd recommend Plex: it just works 99% of the time. That said, Jellyfin has a lot to recommend it too, even if it is rougher around the edges. Some people run both simultaneously; it's not that big of an extra strain. I do recommend doing a little bit of your own research into the features each platform offers, but as a quick rundown, consider some of the following points:
Plex is closed source and is funded through PlexPass purchases, while Jellyfin is open source and entirely user driven. This means a number of things. For one, Plex requires you to purchase a "PlexPass" (a one-time lifetime fee of $159.99 CAD/$120 USD, or a monthly or yearly subscription) in order to access certain features, like hardware transcoding (and we want hardware transcoding) or automated intro/credits detection and skipping; Jellyfin offers some of these features for free through plugins. Plex supports a lot more devices than Jellyfin and updates more frequently. That said, Jellyfin's Android and iOS apps are completely free, while the Plex Android and iOS apps must be activated for a one-time cost of $6 CAD/$5 USD. But that $6 fee gets you a mobile app that is much more functional and features a unified UI across platforms; the Plex mobile apps are simply a more polished experience. The Jellyfin apps are a bit of a mess, and the iOS and Android versions are very different from each other.
Jellyfin's actual media player is more fully featured than Plex's, but on the other hand Jellyfin's UI, library customization and automatic media tagging really pale in comparison to Plex's. Streaming your music library is free through both Jellyfin and Plex, but Plex offers the PlexAmp app for dedicated music streaming, which boasts a number of fantastic features; unfortunately some of them require a PlexPass. If your internet is down, Jellyfin can still stream locally, while Plex can fail to play files unless you've got it set up a certain way. Jellyfin has a slew of neat niche features, like support for comic book libraries with the .cbz/.cbt file types, while Plex offers some free ad-supported TV and films; they even have a free channel that plays nothing but classic Doctor Who.
Ultimately it's up to you. I settled on Plex because although some features are paywalled, it just works; it's more reliable and easier to use, and a one-time fee is much easier to swallow than a subscription. I had a pretty easy time getting my boomer parents and tech-illiterate brother introduced to and using Plex, and I don't know if I would've had as easy a time doing that with Jellyfin. I should also mention that Jellyfin takes a little extra tinkering to get going on Ubuntu (you'll have to set up process permissions), so if you're more tolerant of tinkering, Jellyfin might be up your alley, and I'll trust that you can follow their installation and configuration guide. For everyone else, I recommend Plex.
So pick your poison: Plex or Jellyfin.
Note: The easiest way to download and install either of these packages on Ubuntu is through the Snap Store.
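Here's roughly what that looks like in the terminal; the exact package names are assumptions on my part, so confirm them with a search first:

snap search plex

sudo snap install plexmediaserver

Swap in the Jellyfin package if that's your pick (snap search jellyfin will show it). You can also just use the graphical Snap Store app instead.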
After you've installed one (or both), opening either app will launch the web interface in your browser, allowing you to set all the options server-side.
The process of creating media libraries is essentially the same in both Plex and Jellyfin. You create separate libraries for Television, Movies, and Music, and add the folders containing the respective types of media to their respective libraries. The only difficult or time-consuming aspect is ensuring that your files and folders follow the appropriate naming conventions:
Plex naming guide for Movies
Plex naming guide for Television
Jellyfin follows the same naming rules, but I find its media scanner to be a lot less accurate and forgiving than Plex's. Once you've selected the folders to be scanned, the service will scan your files, tagging everything and adding metadata. Although I do find Plex more accurate, it can still erroneously tag some things, and you might have to manually clean up some tags in a large library. (When I initially created my library it tagged the 1963-1989 Doctor Who as some Korean soap opera and I needed to manually select the correct match, after which everything was tagged normally.) It can also be a bit testy with anime (especially OVAs), so be sure to check TVDB to ensure that you have your files and folders structured and named correctly. If something is not showing up at all, double-check the name.
Once that's done, organizing and customizing your library is easy. You can set up collections, grouping items together to fit a theme or to collect all the entries in a franchise. You can make playlists and add custom artwork to entries. It's fun setting up collections with posters to match; there are even several websites dedicated to helping you do this, like PosterDB. As an example, below are two collections in my library: one collects all the entries in a franchise, the other follows a theme.
My Star Trek collection, featuring all eleven television series, and thirteen films.
My Best of the Worst collection, featuring sixty-nine films previously showcased on RedLetterMedia’s Best of the Worst. They’re all absolutely terrible and I love them.
As for settings: ensure you've got Remote Access going (it should work automatically), and be sure to set your upload speed after running a speed test. In the library settings, set the database cache to 2000MB to ensure a snappier and more responsive browsing experience, and check that playback quality is set to original/maximum. If you're severely bandwidth-limited on your upload and have remote users, you might want to limit the remote stream bitrate to something more reasonable. As a point of comparison, Netflix's 1080p bitrate is approximately 5Mbps, though almost anyone watching through a Chromium-based browser is streaming at 720p and 3Mbps. Other than that you should be good to go. For actually playing your files, there's a Plex app for just about every platform imaginable. I mostly watch television and films on my laptop using the Windows Plex app, but I also use the Android app, which can broadcast to the Chromecast connected to the TV in the office, and the Android TV app for our smart TV. Both are fully functional and easy to navigate, and I can attest to the OS X version being equally functional.
Part Eight: Finding Media
Now, this is not really a piracy tutorial; there are plenty of those out there. But if you're unaware, BitTorrent is free and pretty easy to use: just pick a client (qBittorrent is the best) and go find some public trackers to peruse. Just know that all the best trackers are private and invite-only, and that they can be exceptionally difficult to get into. I'm already on a few, and even then, some of the best ones are wholly out of my reach.
If you decide to take the left hand path and turn to Usenet you’ll have to pay. First you’ll need to sign up with a provider like Newshosting or EasyNews for access to Usenet itself, and then to actually find anything you’re going to need to sign up with an indexer like NZBGeek or NZBFinder. There are dozens of indexers, and many people cross post between them, but for more obscure media it’s worth checking multiple. You’ll also need a binary downloader like SABnzbd. That caveat aside, Usenet is faster, bigger, older, less traceable than BitTorrent, and altogether slicker. I honestly prefer it, and I'm kicking myself for taking this long to start using it because I was scared off by the price. I’ve found so many things on Usenet that I had sought in vain elsewhere for years, like a 2010 Italian film about a massacre perpetrated by the SS that played the festival circuit but never received a home media release; some absolute hero uploaded a rip of a festival screener DVD to Usenet. Anyway, figure out the rest of this shit on your own and remember to use protection, get yourself behind a VPN, use a SOCKS5 proxy with your BitTorrent client, etc.
On the legal side of things, if you’re around my age, you (or your family) probably have a big pile of DVDs and Blu-Rays sitting around unwatched and half forgotten. Why not do a bit of amateur media preservation, rip them and upload them to your server for easier access? (Your tools for this are going to be Handbrake to do the ripping and AnyDVD to break any encryption.) I went to the trouble of ripping all my SCTV DVDs (five box sets worth) because none of it is on streaming nor could it be found on any pirate source I tried. I’m glad I did, forty years on it’s still one of the funniest shows to ever be on TV.
Part Nine/Epilogue: Sonarr/Radarr/Lidarr and Overseerr
There are a lot of ways to automate your server for better functionality or to add features you and other users might find useful. Sonarr, Radarr, and Lidarr are part of a suite of "Servarr" services (there's also Readarr for books and Whisparr for adult content) that automate the collection of new episodes of TV shows (Sonarr), new movie releases (Radarr), and music releases (Lidarr). They hook into your BitTorrent client or Usenet binary downloader and crawl your preferred torrent trackers and Usenet indexers, alerting you to new releases and automatically grabbing them. You can also use these services to manually search for new media, and even replace or upgrade your existing media with better quality uploads. They're a little tricky to set up on a bare-metal Ubuntu install (ideally you should be running them in Docker containers), and I won't be providing a step-by-step on installing and running them; I'm simply making you aware of their existence.
The other bit of kit I want to make you aware of is Overseerr, a program that scans your Plex media library and serves recommendations based on what you like. It also allows you and your users to request specific media. It can even be integrated with Sonarr/Radarr/Lidarr so that fulfilling those requests is fully automated.
And you're done. It really wasn't all that hard. Enjoy your media. Enjoy the control you have over that media. And be safe in the knowledge that no hedge fund CEO motherfucker who hates the movies but is somehow in control of a major studio will be able to disappear anything in your library as a tax write-off.
brandinotbroke · 6 months ago
Linux distros - what is the difference, which one should I choose?
Caution, VERY long post.
With more and more simmers looking into linux lately, I've been seeing the same questions over and over again: Which distro should I choose? Is distro xyz newbie-friendly? Does this program work on that distro?
So I thought I'd explain the concept of "distros" and clear some of that up.
What are the key differences between distros?
Linux distros are NOT different operating systems (they're all still linux!) and the differences between them aren't actually as big as you think.
Update philosophy: Some distros, like Ubuntu, (supposedly) focus more on stability than being up-to-date. These distros will release one big update once every year or every other year and they are thoroughly tested. However, because the updates are so huge, they inevitably tend to break stuff anyway. On the other end of the spectrum are so-called "rolling release" distros like Arch. They don't do big annual updates, but instead release smaller updates very frequently. They are what's called "bleeding edge" - if there is something new out there, they will be the first ones to get it. This can of course impact stability, but on the other hand, stuff gets improved and fixed very fast. Third, there are also "middle of the road" distros like Fedora, which kind of do... both. Fedora gets big version updates like Ubuntu, but they happen more frequently and are comparably smaller, thus being both stable and reasonably up-to-date.
Package manager: Different distros come with different package managers (APT on Ubuntu, DNF on Fedora, etc.). Package managers keep track of all the installed programs on your PC and allow you to update/install/remove programs. You'll often work with the package manager in the terminal: for example, if you want to install Lutris on Fedora, you'd type in "sudo dnf install lutris" ("sudo" stands for "super user do"; it's the equivalent of administrator rights on Windows). Different package managers come with different pros and cons (see the side-by-side example after this list).
Core utilities and programs: 99% of distros use the same stuff in the background (you don’t even directly interact with it, e.g. background process managing). The 1% that do NOT use the same stuff are obscure distros like VoidLinux, Artix, Alpine, Gentoo, Devuan. If you are not a Linux expert, AVOID THOSE AT ALL COST.
Installation process: Some distros are easier to install than others. Arch is infamous for being a bit difficult to install, but at the same time, its documentation is unparalleled. If you have patience and good reading comprehension, installing Arch will teach you just about everything you ever need to know about Linux. If you want to go an easier and safer route for now, anything with an installer, like Mint or Fedora, would suit you better.
Community: Pick a distro with an active community and lots of good documentation! You’ll need help. If you are looking at derivatives (e.g. ZorinOS, which is based on Ubuntu which is based on Debian), ask yourself: Does this derivative give you enough benefits to potentially give up community support of the larger distro it is based on? Usually, the answer is no.
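As promised above, here's a side-by-side sketch of the same everyday tasks in two package managers, using Lutris as the example package:

# Ubuntu/Mint (APT)
sudo apt update            # refresh the package lists
sudo apt install lutris
sudo apt remove lutris
sudo apt upgrade           # update all installed packages

# Fedora (DNF)
sudo dnf install lutris
sudo dnf remove lutris
sudo dnf upgrade           # refreshes metadata and updates everything in one step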
Okay, but what EDITION of this distro should I choose?
"Editions" or “spins” usually refer to variations of the same distro with different desktop environments. The three most common ones you should know are GNOME, KDE Plasma and Cinnamon.
GNOME's UI is more similar to macOS, but not exactly the same.
KDE Plasma looks and feels a lot like Windows' UI, but with more customization options.
Cinnamon is also pretty windows-y, but more restricted in terms of customization and generally deemed to be "stuck in 2010". 
Mint vs. Pop!_OS vs. Fedora
Currently, the most popular distros within the Sims community seem to be Mint and Fedora (and Pop!_OS to some extent). They are praised for being "beginner friendly". So what's the difference between them?
Both Mint and Pop!_OS are based on Ubuntu, whereas Fedora is a "standalone" upstream distro, meaning it is not based on another distro.
Personally, I recommend Fedora over Mint and Pop!_OS for several reasons. To name only a few:
I mentioned above that Ubuntu's update philosophy tends to break things once a big update rolls around every two years. Since both Mint and Pop!_OS are based on Ubuntu, they are also affected by this.
Ubuntu, Mint and Pop!_OS like to modify their stuff regularly for theming/branding purposes, but this ALSO tends to break things. It is apparently so bad that there is an initiative to stop this.
Pop!_OS uses the GNOME desktop environment, which I would not recommend if you are switching from Windows. Mint offers Cinnamon, which is visually and technically outdated (it still relies on X11, a windowing system whose roots go back to 1984), but still beloved by a lot of people. Fedora offers the more modern KDE Plasma.
Personal observation: Most simmers I've encountered who had severe issues with setting up Linux went with an Ubuntu-based distro. There's just something about it that's fucked up, man.
And this doesn't even get into the whole Snaps vs. Flatpak controversy, but I will skip that for brevity.
Does SimPE (or any other program) work on this distro?
If it works on Fedora, then it works on Mint/Ubuntu/Arch/etc., and vice versa. It's all just a question of having the necessary dependencies installed and installing the program itself properly. Some distros may have certain prerequisites pre-installed while others don't, but you can always install those yourself. Like I said, different distros are NOT different operating systems; it's all still Linux, and you can ultimately customize it however you want.
In short: Yeah, all Sims 2-related programs work. Yes, ReShade too. It ultimately doesn't really matter what distro you use as long as it is not part of the obscure 1% I mentioned above.
A little piece of advice
Whatever distro you end up choosing: get used to googling stuff and practice reading comprehension! There are numerous forums, Discord servers and subreddits where you can ask people for help. Generally speaking, the Linux community is very open to helping newbies. HOWEVER, they are not as tolerant of nagging and laziness as the Sims community tends to be. Show initiative, use Google and common sense, try things out before screaming for help, and be detailed and respectful when explaining your problems. They appreciate that. Also, use the Arch Wiki even if you do not use Arch Linux; most of it is applicable to other distros as well.
ms-demeanor · 1 year ago
When I'm reccing linux to non-linux users, part of the reason that I say "install it on an old computer" is that one of the things that people worry about when experimenting with new operating systems is that they'll break the computer beyond repair. Which isn't *likely*, but which is a non-issue when you're talking about a computer that was otherwise just going to be discarded.
I'm leery of recommending dual-booting or booting from a thumb drive to absolute newbies because they're hesitant to experiment on their daily-use computers (with good reason! They've been told not to click on things they don't understand and not to get out of their depth their whole lives so it's very difficult to try to suppress those instincts on the computer you use for school, even if it isn't the operating system you use for school).
So putting it on an old computer is like a free pass to computer class. You can't break what was already broken, so they feel more free to try different stuff.
And, like, I totally get where people are coming from when they say to run a VM, but I'm largely talking about users who aren't aware that they can have multiple profiles on their computer, or who have trouble switching between profiles on a shared computer.
I also see people saying "installing Linux isn't any harder than installing Windows" and A) that's going to depend on a LOT of variables, and B) I don't know if you know this, but the reason most people don't buy bare-metal PCs (machines sold without an operating system) is that they don't actually know how to install one. There are a ton of people in the world who I'd trust to assemble a gaming rig physically but who I think would really struggle to get it from a mass of connected parts in a case to a computer that is running software.
verynerdyelaine · 1 year ago
Window Managers are cool
So, I've been using Unix-based operating systems (macOS and Linux) for a while now, but back when I was just starting with Linux there was a term a lot of people kept using: "tiling window manager".
What is a Tiling Window Manager?
Well, a tiling window manager is a window manager that organizes windows into grids of tiles rather than allowing them to overlap.
Why do you need it?
Tiling window managers are amazing at organizing windows and keeping track of whatever tasks are in front of you. They are also keyboard-centric: you can switch between windows with just your keyboard, and as a Neovim user I use HJKL to breeze through my windows. Tiling window managers also offer workspaces (desktops on macOS), which let you sort whatever you're working on into different groups.
What Tiling Window Managers do I use?
On Linux I use dwm, and on macOS I use yabai. dwm is an amazing window manager: highly configurable, minimal, and yet simple, which fits my cozy zone. yabai is what I use on macOS because it's one of only two tiling window managers there (the other being Amethyst), and yabai is the more powerful and configurable of the two.
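For a taste of how yabai is configured, here's a minimal sketch of a ~/.yabairc; it's just a shell script of yabai commands, and these options are only a sample:

#!/usr/bin/env sh
yabai -m config layout bsp             # tile windows via binary space partitioning
yabai -m config window_gap 10          # gap between tiled windows, in pixels
yabai -m config mouse_follows_focus on # pointer jumps to the newly focused window

dwm, by contrast, is configured by editing its C source and recompiling, which is a big part of its minimalist charm.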
Final Thoughts
Tiling Window Managers are cool and you should give them a try :3
ranidspace · 10 months ago
We're a year out from Windows 10 reaching End of Life on October 14, 2025. After that date it will no longer receive updates, including security updates.
Security updates are INSANELY important these days; it feels like some insane security flaw gets found and promptly fixed every month. Running an unpatched version of an operating system puts a target on you and puts your computer and your home network at risk. It is rare to be attacked in that way, yes, but don't take the risk. Your two options:
Windows 11
The issue with Windows 11 is not that it's bad, it's just that it adds and changes a bunch of shit for no reason, and for that I recommend installing Winaero Tweaker. This is a program that changes registry entries and settings on your computer to disable all the telemetry, remove the dumbass Microsoft Copilot and Cortana shit, bring back the old right-click menu, the old taskbar, and a bunch more. It's available for Windows 10 as well; if you're not on Windows 11 yet, please check out this program anyway, as there's plenty you may want to change. It's one of the first things I always install on a new Windows computer.
Once you have customized it a bit, there really isn't much of a difference between Windows 10 and 11; they just added more bullshit, which you can mostly disable.
Linux
i would deal with fucked up drivers and lightly buggy programs if i never had to deal with windows again.
If you just use your computer to browse the web and manage files, talk to people on Discord and shit, Linux works perfectly. You will never have to look at a command prompt if you don't want to.
If you do work on it, you may miss some programs, but basically just Microsoft Office and the Adobe suite. Office is basically covered by LibreOffice (which works on Windows too, if you want to try it out), but it's a bit harder with the Adobe programs. Look into the individual programs you use to see what works for you.
If you play games, it depends. With the success of the Steam Deck, more and more games are working towards Linux compatibility. Even now, of the top 1000 games on Steam, 85% are compatible with Linux, with only 4% straight-up refusing to run. Minecraft works well with Prism Launcher (again, use this even if you're on Windows), Roblox works with Sober (idk what other non-Steam games there are), emulation works amazingly, there are NVIDIA drivers for it, it's good.
Like, there's a bit more setup and some more troubleshooting needed when something goes wrong, but it is so fucking nice to just not have to deal with Windows Bullshit.
I recommend Kubuntu, though i've heard a lot of support for Linux Mint. I'd be happy to answer any other linux questions lol
konqi-official · 9 days ago
One year of using Linux: Some personal thoughts/ramblings
Okay, more like 1.5 years based on my chat history with some of my friends, but I started using Linux as my main operating system sometime after Microsoft announced EOL for Windows 10.
Although I had some prior experience with Linux Mint, I began with Kubuntu as my "daily driver" since a friend of mine spoke highly of it. I'm a KDE Plasma lover, and Ubuntu is already a highly popular (and well-supported) Linux variant.
I used it for an entire year and enjoyed it quite a bit! I was able to play every game I had on my original OS, and some of them ran a bit faster than before. I like that updating my system was consistent and reliable, even if I had to do it more often than I did back on Windows (it's also easier to update my machine knowing that it won't randomly make it worse like Windows does).
A month ago, I got a new computer (technically used, but with better specs than the old one) and decided to try Manjaro this time around. I was getting a bit tired of Snap and the way most programs always felt a bit outdated, but I was scared of going all-in on Arch, so I thought Manjaro would be a good stepping stone. It has its own variety of issues, but I haven't had too much of a problem with it. I still gotta move all my files, though...
Anyways, some thoughts and feelings in bulleted list form:
Krita has some weird quirks on my machine, even when I use the official AppImage from the website. On a multi-monitor setup, the menus for resizing the canvas will appear completely offscreen and require me to use the Super+Arrow Key shortcut to snap them back onto my monitor. Also, the canvas tends to behave weirdly when alternating between pen and mouse input (i.e. the cursor preview not moving, or the selection commands not functioning until the mouse is moved off of and back onto the canvas). It doesn't stop me from using the program, but it does get a bit annoying.
Spectacle my beloved. Such a good screenshot tool and I like that I can tweak the screenshot more easily compared to the Snipping Tool on Windows 10.
On Kubuntu, it was kinda frustrating to have Firefox constantly be outdated due to using the Snap version. This was part of why I decided to try out Manjaro.
I love that Manjaro preinstalled Yakuake and a couple of other programs that I often used back on Kubuntu. They know what we want <3.
Manjaro doesn't play nicely with turning the screen off when the computer is idling. The screen either dims but doesn't turn off (leaving only the cursor visible), or it does turn off but crashes the KDE Plasma shell, requiring me to restart it from the terminal (see the sketch after this list). I'm not sure how to fix this permanently, but my system does have an update to run this morning. Hoping it fixes that 🤞.
Troubleshooting feels a bit hard when problems and solutions are scattered across multiple places. This is an issue I've noticed with several open-source projects, where answers can be spread across a forum, an issues page on a repository, and a wiki. I wish there were an easier way to search for and ask for help, so that there'd be fewer duplicate questions.
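About that shell crash: you can usually bring Plasma back from a terminal without a full reboot. A sketch, assuming Plasma 5 (on Plasma 6 the tools are kquitapp6 and kstart):
kquitapp5 plasmashell && kstart5 plasmashell          # kill and relaunch the desktop shell
systemctl --user restart plasma-plasmashell.service   # alternative, where Plasma runs as a systemd user service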
Overall, Linux has been a very stable and enjoyable experience, and less unwieldy than I was anticipating. Even then, I don't think I'm quite ready to recommend it to people who aren't familiar with computers and using a command line. I hope that more people adopt Linux, even if it's because of how Windows is... changing.
So yeah! Pretty nice, pretty customizable. Sometimes it has issues, but I find workarounds most of the time. I hope that the community around open source software can grow and improve, and especially that it comes to include a wider range of people with different skill levels and walks of life.
9 notes · View notes
netscapenavigator-official · 10 months ago
Text
I recently discovered another oddity in the Unicode standards, so I'm gonna run a test. I'm fairly certain that this is operating-system dependent. On Linux (Zorin OS), these three characters are all different, with only one being an emoji. On iOS, however, one is broken and the other two are the same emoji. I'm assuming the same will happen on macOS, but I have no idea about Android and Windows. So, here they are:
Character #1: 🖶
Character #2: 🖨️
Character #3: 🖨
To see how these characters look on Zorin OS (Linux), check below this:
Tumblr media
Character #1 is Unicode character U+1F5B6
Character #2 is Unicode character U+1F5A8 followed by U+FE0F (an invisible variation selector)
Character #3 is Unicode character U+1F5A8
Yes, both #2 and #3 share the same base Unicode value, which is why some operating systems don't differentiate them. The systems that can tell the difference between the emoji U+1F5A8 (Character #2) and the plain Unicode character U+1F5A8 (Character #3) are almost certainly keying off that trailing U+FE0F: it's VARIATION SELECTOR-16, which requests emoji presentation, while the bare code point defaults to the plain text-style glyph. Platforms that honor the selector draw #2 as a colorful emoji and #3 as a plain character; platforms that ignore it render both the same.
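If you want to poke at this yourself on Linux, here's a quick sketch (it assumes bash 4.2+ plus the usual iconv and xxd tools) that dumps each sequence as one code point per line:
for ch in $'\U0001F5B6' $'\U0001F5A8\uFE0F' $'\U0001F5A8'; do
  printf '%s' "$ch" | iconv -f UTF-8 -t UTF-32BE | xxd -p -c4   # 4-byte UTF-32 words
  echo ---
done
The middle sequence prints two lines, 0001f5a8 and 0000fe0f; that second word is the hidden selector.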
16 notes · View notes
flat-assembler · 3 months ago
Text
Some information on the "Assemblies".
You've probably seen me mention FASM. Or not. Regardless, it's an important name.
FASM stands for flat assembler (intentionally with lowercase letters. idk why).
Likewise, other Assemblers exist. Popular ones include NASM, YASM, and MASM. Feel free to look these up on your own time.
We stick with FASM because it assembles quickly and its output generally takes up less space than that of the other Assemblers.
Now, to explain the Assemblers. Each Assembler uses its own syntax to turn the code you give it into a result — usually an executable. Assembly code written for FASM generally does not work with the other Assemblers, and likewise the rest are generally not interchangeable.
Thus, in a sense, it may feel like there are multiple Assemblies. Now, to say this would not be inaccurate, but it's also not correct in the way you would think.
Like I said, the differences between the Assemblers listed above are in syntax. But ultimately, all of them follow the same Assembly language.
This is sometimes called x86 Assembly when referring to 32-bit compatible code, but for our purposes, we call it x86-64 Assembly because we want to utilize the 64-bit registers that are present in nearly every modern personal computer.
x86-64 Assembly is only one of many Assembly languages. For the common developer following along, however, it is not necessary to cover any other form.
In addition to the type of Assembly language and the Assembler's syntax, another thing to take into consideration is the operating system calling conventions.
There are only two conventions I consider important in the context of FASM: System V ABI, and Windows x64 ABI.
System V ABI is the convention I will almost always refer to on this blog. It covers both Linux and macOS (with only minor differences between the two).
As the name implies, Windows x64 ABI is for Windows. Windows is more confusing, though: while the System V ABI remains constant across Linux versions, Windows guarantees far less stability at the system level; most notably, raw system call numbers can change between Windows versions and even builds, which is why Windows programs normally go through the Win32 API rather than making syscalls directly.
Sorry, that's an information overload, but if nothing else, remember this: FASM uses x86-64 Assembly, and this blog will cover Linux (System V ABI) convention for FASM.
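And if you want to try FASM right away on Linux, the build step is refreshingly short. A sketch, assuming FASM is installed and you've written some hello.asm targeting x86-64 Linux with an executable output format:
fasm hello.asm hello   # assembles straight to an executable; no separate linker step
chmod +x hello         # fasm doesn't mark its output as executable, so do that yourself
./hello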
6 notes · View notes
promodispenser · 10 months ago
Text
Tumblr media
Leveraging XML Data Interface for IPTV EPG
This blog explores the significance of optimizing the XML Data Interface and XMLTV schedule EPG for IPTV. It emphasizes the importance of EPG in IPTV, preparation steps, installation, configuration, file updates, customization, error handling, and advanced tips.
The focus is on enhancing user experience, content delivery, and securing IPTV setups. The comprehensive guide aims to empower IPTV providers and tech enthusiasts to leverage the full potential of XMLTV and EPG technologies.
1. Overview of the Context:
The context focuses on the significance of optimizing the XML Data Interface and leveraging the latest XMLTV schedule EPG (Electronic Program Guide) for IPTV (Internet Protocol Television) providers. L&E Solutions emphasizes the importance of enhancing user experience and content delivery by effectively managing and distributing EPG information.
This guide delves into detailed steps on installing and configuring XMLTV to work with IPTV, automating XMLTV file updates, customizing EPG data, resolving common errors, and deploying advanced tips and tricks to maximize the utility of the system.
2. Key Themes and Details:
The Importance of EPG in IPTV: The EPG plays a vital role in enhancing viewer experience by providing a comprehensive overview of available content and facilitating easy navigation through channels and programs. It allows users to plan their viewing by showing detailed schedules of upcoming shows, episode descriptions, and broadcasting times.
Preparation: Gathering Necessary Resources: The article highlights the importance of gathering required software and hardware, such as XMLTV software, EPG management tools, reliable computer, internet connection, and additional utilities to ensure smooth setup and operation of XMLTV for IPTV.
Installing XMLTV: Detailed step-by-step instructions are provided for installing XMLTV on different operating systems, including Windows, Mac OS X, and Linux (Debian-based systems), ensuring efficient management and utilization of TV listings for IPTV setups.
Configuring XMLTV to Work with IPTV: The article emphasizes the correct configuration of M3U links and EPG URLs to seamlessly integrate XMLTV with IPTV systems, providing accurate and timely broadcasting information.
3. Customization and Automation:
Automating XMLTV File Updates: The importance of automating XMLTV file updates for maintaining an updated EPG is highlighted, with detailed instructions on using cron jobs and scheduled tasks (see the sketch after this list).
Customizing Your EPG Data: The article explores advanced XMLTV configuration options and leveraging third-party services for enhanced EPG data to improve the viewer's experience.
Handling and Resolving Errors: Common issues related to XMLTV and IPTV systems are discussed, along with their solutions, and methods for debugging XMLTV output are outlined.
Advanced Tips and Tricks: The article provides advanced tips and tricks for optimizing EPG performance and securing IPTV setups, such as leveraging caching mechanisms, utilizing efficient data parsing tools, and securing authentication methods.
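As a concrete illustration of the automation step mentioned above, a single crontab entry is usually enough. This is a sketch only; the grabber name, paths, and schedule are placeholders to adapt to your own XMLTV setup:
30 4 * * * /usr/bin/tv_grab_example --output /var/lib/iptv/epg.xml >> /var/log/xmltv-update.log 2>&1   # refresh the guide daily at 04:30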
The conclusion emphasizes the pivotal enhancement of IPTV services through the synergy between the XML Data Interface and XMLTV Guide EPG, offering a robust framework for delivering engaging and easily accessible content. It also encourages continual enrichment of knowledge and utilization of innovative tools to stay at the forefront of IPTV technology.
4. Language and Structure:
The article is written in English and follows a structured approach, providing detailed explanations, step-by-step instructions, and actionable insights to guide IPTV providers, developers, and tech enthusiasts in leveraging the full potential of XMLTV and EPG technologies.
The conclusion emphasizes the pivotal role of the XML Data Interface and XMLTV Guide EPG in enhancing IPTV services. It serves as a call to action for IPTV providers, developers, and enthusiasts to explore the sophisticated capabilities of XMLTV and EPG technologies for delivering unparalleled content viewing experiences.
youtube
7 notes · View notes
digitaldetoxworld · 5 months ago
Text
Building Your Own Operating System: A Beginner’s Guide
An operating system (OS) is an essential component of computer systems, serving as an interface between hardware and software. It manages system resources, provides services to users and applications, and ensures efficient execution of processes. Without an OS, users would have to manually manage hardware resources, making computing impractical for everyday use.
Tumblr media
Lightweight operating system for old laptops
Functions of an Operating System
Operating systems perform several crucial functions to maintain system stability and usability. These functions include:
1. Process Management
 The OS allocates resources to processes and ensures fair execution while preventing conflicts. It employs algorithms like First-Come-First-Serve (FCFS), Round Robin, and Shortest Job Next (SJN) to optimize CPU utilization and maintain system responsiveness.
2. Memory Management
The OS tracks memory usage, allocating and reclaiming memory with techniques such as paging, segmentation, and virtual memory. These mechanisms enable multitasking and improve overall system performance.
3. File System Management
It provides mechanisms for reading, writing, and deleting files while maintaining security through permissions and access control. File systems such as NTFS, FAT32, and ext4 are widely used across different operating systems.
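On a Linux system, this permission model is visible directly in the shell. A small sketch; the file name, user, and group are hypothetical:
ls -l notes.txt                    # show owner, group, and permission bits
chmod 640 notes.txt                # owner read/write, group read-only, others no access
sudo chown alice:staff notes.txt   # reassign owner and group (requires root)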
4. Device Management
 The OS provides device drivers to facilitate interaction with hardware components like printers, keyboards, and network adapters. It ensures smooth data exchange and resource allocation for input/output (I/O) operations.
5. Security and Access Control
 It enforces authentication, authorization, and encryption mechanisms to protect user data and system integrity. Modern OSs incorporate features like firewalls, anti-malware tools, and secure boot processes to prevent unauthorized access and cyber threats.
6. User Interface
The OS provides the interface through which users interact with the machine, via either a command-line interface (CLI) or a graphical user interface (GUI). CLI-based systems, such as Linux terminals, provide direct access to system commands, while GUI-based systems, such as Windows and macOS, offer intuitive navigation through icons and menus.
Types of Operating Systems
Operating systems come in various forms, each designed to cater to specific computing needs. Some common types include:
1. Batch Operating System
A batch OS executes jobs in groups ("batches") without direct user interaction: jobs are collected, queued, and run one after another. These systems were widely used in early computing environments for tasks like payroll processing and scientific computations.
2. Multi-User Operating System
A multi-user OS allows several users to work on the same machine at the same time. It ensures fair resource allocation and prevents conflicts between users. Examples include UNIX and Windows Server.
3. Real-Time Operating System (RTOS)
RTOS is designed for time-sensitive applications, where processing must occur within strict deadlines. It is used in embedded systems, medical devices, and industrial automation. Examples include VxWorks and FreeRTOS.
4. Mobile Operating System
Mobile OSs are tailored for smartphones and tablets, offering touchscreen interfaces and app ecosystems. Examples include Android and iOS.
5. Distributed Operating System
Distributed OS manages multiple computers as a single system, enabling resource sharing and parallel processing. It is used in cloud computing and supercomputing environments. Examples include Google’s Fuchsia and Amoeba.
Popular Operating Systems
Several operating systems dominate the computing landscape, each catering to specific user needs and hardware platforms.
1. Microsoft Windows
Microsoft Windows is the most widely used desktop OS, popular among home users, businesses, and gamers. Windows 10 and 11 are the latest versions, offering improved performance, security, and compatibility.
2. macOS
macOS is Apple’s proprietary OS designed for Mac computers. It provides a seamless experience with Apple hardware and software, featuring robust security and high-end multimedia capabilities.
3. Linux
Linux is an open-source OS favored by developers, system administrators, and security professionals. It offers various distributions, including Ubuntu, Fedora, and Debian, each catering to different user preferences.
4. Android
Android is Google's mobile OS and the most widely used operating system in the world. It is based on the Linux kernel and supports a vast ecosystem of applications.
5. iOS
iOS is Apple’s mobile OS, known for its smooth performance, security, and exclusive app ecosystem. It powers iPhones and iPads, offering seamless integration with other Apple devices.
Future of Operating Systems
The future of operating systems is shaped by emerging technologies such as artificial intelligence (AI), cloud computing, and edge computing. Some key trends include:
1. AI-Driven OS Enhancements
AI-powered features, such as voice assistants and predictive automation, are becoming integral to modern OSs. AI helps optimize performance, enhance security, and personalize user experiences.
2. Cloud-Based Operating Systems
Cloud OSs enable users to access applications and data remotely. Chrome OS is an example of a cloud-centric OS that relies on internet connectivity for most functions.
3. Edge Computing Integration
With the rise of IoT devices, edge computing is gaining importance. Future OSs will focus on decentralized computing, reducing latency and improving real-time processing.
4. Increased Focus on Security
Cyber threats continue to evolve, prompting OS developers to implement advanced security measures such as zero-trust architectures, multi-factor authentication, and blockchain-based security.
3 notes · View notes
minimalsizeconspiracy · 5 months ago
Text
No-Google (fan)fic writing, Part 3: LaTeχ
Storytime
Just like I used Zettelkasten for fic parallel to Word for work for a long time, I used LaTeX alongside Zettelkasten for a few years. The reason why I made the switch to LaTeX in the first place was precisely because I’d been forced to use Word at work, and Word is just about the shittiest application you could possibly choose if you have to make text look pretty. As in, print-worthy pretty, not just “this assignment needs to look somewhat good so my professor doesn’t grade me down”.
So I badgered an acquaintance to show me LaTeX, which he did, which is when I started down that road – that I’m still on, although I am fairly certain it leads to hell. There were a number of reasons why I started using LaTeX for writing fanfic as well:
I ditched Dropbox for GIT, which is way better in terms of version control and allows you to directly compare changes between plain text files. With Zettelkasten’s bespoke .zkn3 file format, the direct comparison unfortunately doesn’t work because it’s not plain text, and I became increasingly frustrated with that.
I got into Raspberry Pis, and while it is possible to work with Zettelkasten on the small screens, even that simple interface became a bit much for the screen size.
I fell into the Transformers fandom with its plethora of canon and fanon terms for body parts, time units and even different curse words.
Boiling all of that down, I made the decision to switch to a system that would allow me to write plain text at all times because plain text is great for direct comparisons of files, for working on your stories regardless of which operating system your computer runs on – and because LaTeX has an amazing package called “glossaries” that I’ll talk about later on.
Word/Writer/Google docs versus LaTeX
Hoo boy, where to even start! Because, you see, LaTeX is NOT "What You See Is What You Get". LaTeX is "What You Get Is What You Want (but that also means that while you're writing your document, it looks nothing like the finished version will look)".
Let me be plain and clear from the start: If you’re looking into an easy and convenient replacement for Word/Writer/Google docs, I can almost assure you that LaTeX is not what you’re looking for, at least not plain LaTeX. Learning LaTeX requires you to completely rethink how you approach text, because
where in Word, you’ll have boldface and italics and a mixture of both,
in LaTeX, you must write \textbf{boldface} and \textit{italics} and \textbf{\textit{a mixture of both}} and put \chapter{around every single one of your chapter headings} and never, ever forget to close a curly bracket or you’ll (temporarily) break your document.
It ain’t for the faint of heart or those unwilling to learn how to write plain text with code that is actually instructions to your computer on what you would like pieces of your text to look like in your output file.
And for 99.9% of stories, LaTeX is completely overpowered. Seriously.
But I love LaTeX and use it for writing fanfic, so I’ll include it here.
Cost
On the pro side, LaTeX is free. On Windows, you can install either MiKTeX or TeX Live; on Linux, only the latter.
+1 for being free. Just make sure you have enough bandwidth and time when you install, because it’ll take time. Hours, if your computer is old or you have little bandwidth.
Interface
Here comes the first catch:
You will almost never interact directly with LaTeX, especially if you're new to it, because LaTeX runs in the background and you need an extra interface to interact with it, unless you're comfortable using the command line.
Fortunately, there are very good LaTeX editors: TeXstudio and TeXMaker are probably the most popular, and either is good and free. Or you can use any plain text editor, really: Notepad++, KATE, whatever Mac has.
Which I sort of want to give +1 for, because it’s not difficult to find a good LaTeX editor, buuut you actually have to download and install an extra editor to use it.
File formats
Still, there is the +1 I’ll give it for being plain text. You can open a LaTeX document in any editor you like and you’ll be able to read the file contents. The official file extension is .tex, but it’s basically the same as opening a .txt-file.
That is actually great. Genuinely, really great, because regardless of which computer you’re using, every computer, any operating system will come with an editor that can open .tex-files.
Even better, if your documents aren’t too complicated, they can be exported into HTML, which is what I usually do. Write story in LaTeX, export to HTML via make4ht, then copypaste into the AO3 HTML or Rich Text editor.
But the main output format for LaTeX is actually PDF. To use make4ht, you need to use the command line, so it’s actually a bit more complicated than with Zettelkasten or LibreOffice Writer to get your story out of LaTeX and into AO3.
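The conversion itself is a single command. A sketch; make4ht ships with TeX Live, and the file name is made up:
make4ht story.tex   # writes story.html next to the source file
After that it's open story.html, select all, copypaste into AO3.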
Features
As far as features are concerned, there are an insane number of things you can do with LaTeX, layout-wise. I could spend a whole year writing an entry every day on something LaTeX can do and I still wouldn’t have covered even half of it.
LaTeX requires you to have proper document structures, meaning chapters, sections etc. It lets you comment out text that you want to keep but don't want printed in the final version. It lets you load entire chapters or scenes from other .tex files if you want to keep them separated like the "notes" in Zettelkasten. There's even a package (todonotes) that allows you to include fancy coloured to-do notes just to annoy your beta with whiny comments about how you're struggling with a particular scene. (I do that a lot.)
In other words, it is extensive. So I’m going to just focus on what was my main reason to move to LaTeX to write fanfic: the “glossaries” package. Remember what I said above about all the different terms in Transformers? Canon and fanon terminology is, in fact, so diverse and extensive that people write whole lexica for it.
Hands can be servos. Feet can be pedes or peds. And the time units in different continuities (there’s at least seven) make you want to break down, hit the floor with your fists and scream “why?!?” as your neighbours call 112.
The glossaries package in combination with what are called “conditional switches” in LaTeX allows me to create a sort of “dictionary” including all of those different time units while using the same “keyword” for the same concept.
Let’s pick “year” as an example. The entry for that looks approximately like this:
\ifDreamwave \newglossaryentry{year}{name={ano-cycle},description={probably meaning a year in the Dreamwave continuity}} \fi
\ifEnergon \newglossaryentry{year}{name={cycle}, description={year in the Energon continuity}} \fi
\ifIDWTwo \newglossaryentry{year}{name={kilocycle}, description={year in IDW 2019}} \fi
I could go on, but I think the principle has become clear. All of these have in common that I “call” them by entering \gls{year} in the actual text. What the \if does is switch between the different versions, depending on which I enable by adding, for example, \Energontrue.
Every time \gls{year} appears in the text, LaTeX will now automatically replace it with “cycle”, and I can stop trying to remember which word the particular continuity I’m writing for uses.
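One practical note: if you also want the glossary itself printed, with names and descriptions, the compile needs an extra pass. A sketch, again with a made-up file name:
pdflatex story.tex     # first pass: records every \gls reference
makeglossaries story   # base name only, no .tex extension
pdflatex story.tex     # second pass: pulls the built glossary in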
Does this blow the whole issue of different terminology entirely out of proportion?
Yes, yes, it does. But if you think that will dissuade me, you can’t have met many fanfiction authors. I do not care in the slightest that it is entirely bonkers to go to all that effort just to make sure I’m using the right terminology for the continuity I’m writing in. You’re missing the point.
Syncing
Unless you’re using Overleaf (I’m going to laugh my arse off if any of you tells me you’re using your university-sponsored Overleaf licence to write fanfiction), syncing your .tex-files across machines requires the use of another service – Dropbox, OneDrive, but actually, GIT is the best, either online via GitHub or GitLab or with a USB. I will get to the differences between those services in due time.
Ease of use for Word/Google doc-users
XD
I said it above already, but if you're coming straight from Word or Google docs to LaTeX, you're going to have to invest time into understanding how LaTeX works. You'll have to get used to writing code in your document and being unable to immediately see what your text looks like in the output, unless you use LyX, a LaTeX editor that was built specifically to make it easier for Word users to switch to LaTeX. But even so, you'll need to learn how to structure documents.
If you’re thinking of using LaTeX for other purposes as well – uni, publishing actual books, anything where it’s useful to be able to layout your documents professionally yourself, absolutely. At least give it a try.
In order to just write fanfic? In franchises that haven’t decided to come up with new time units every time they create a new continuity?
It’s probably not worth it. The only reason I’m using it to write fanfic is because I already knew all of that stuff. I didn’t have to invest time in learning LaTeX in the first place, I just started using LaTeX for writing fanfic as well.
Don’t get me wrong. I love LaTeX. It is just a huge time investment if you can’t also use those skills somewhere else, and if it’s the plain text you’re after, the next part will feature Markdown – which has by and large the same benefits as LaTeX, but takes about half an hour to learn.
Read No-Google (fan)fic writing, Part 1: LibreOffice Writer
Read No-Google (fan)fic writing, Part 2: Zettelkasten
Read No-Google (fan)fic writing, Part 4: Markdown
Read No-Google (fan)fic writing, Part 5: Obsidian
5 notes · View notes
linuxtoolsguide · 3 months ago
Text
Installing Kali Linux on a USB Stick: A Step-by-Step Guide
If you want a portable, powerful cybersecurity toolkit you can carry in your pocket, installing Kali Linux on a USB stick is the perfect solution. With Kali on a USB, you can boot into your personalized hacking environment on almost any computer without leaving a trace — making it a favorite setup for ethical hackers, penetration testers, and cybersecurity enthusiasts.
Tumblr media
In this guide, we'll walk you through how to install Kali Linux onto a USB drive — step-by-step — so you can have a portable Kali environment ready wherever you go.
Why Install Kali Linux on a USB?
Before we dive into the steps, here’s why you might want a Kali USB:
Portability: Carry your entire hacking setup with you.
Privacy: No need to install anything on the host machine.
Persistence: Save your settings, files, and tools even after rebooting.
Flexibility: Boot into Kali on any system that allows USB boot.
There are two main ways to use Kali on a USB:
Live USB: Runs Kali temporarily without saving changes after reboot.
Persistent USB: Saves your files and system changes across reboots.
In this article, we'll focus on setting up a Live USB, and I'll also mention how to add persistence if you want. And if you're looking for more Kali Linux knowledge, you can visit our website any time.
Website Name : Linux Tools Guide
What You’ll Need
✅ A USB drive (at least 8GB; 16GB or more recommended if you want persistence).
✅ Kali Linux ISO file (download it from the official Kali website).
✅ Rufus (for Windows) or Etcher/balenaEtcher (for Mac/Linux/Windows).
✅ A computer that can boot from USB.
Step 1: Download the Kali Linux ISO
Go to the Kali Linux Downloads page and grab the latest version of the ISO. You can choose between the full version or a lightweight version depending on your USB size and system requirements.
Tip: Always verify the checksum of the ISO to ensure it hasn't been tampered with!
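On Linux or macOS, verification is a single command. A sketch; the file name is an example, so use the one you actually downloaded (Windows users can get the same hash with CertUtil):
sha256sum kali-linux-2025.1-live-amd64.iso   # compare the output to the hash published on the Kali downloads page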
Step 2: Insert Your USB Drive
Plug your USB stick into your computer. ⚠️ Warning: Installing Kali onto the USB will erase all existing data on it. Backup anything important first!
Step 3: Create a Bootable Kali Linux USB
Depending on your operating system, the tool you use may vary:
For Windows Users (using Rufus):
Download and open Rufus (Get Rufus here).
Select your USB drive under Device.
Under Boot selection, choose the Kali Linux ISO you downloaded.
Keep the Partition scheme as MBR (for BIOS) or GPT (for UEFI) based on your system.
Click Start and wait for the process to complete.
For Mac/Linux Users (using balenaEtcher):
Download and open balenaEtcher (Get Etcher here).
Select the Kali ISO.
Select the USB drive.
Click Flash and wait until it's done.
That's it! You now have a Live Kali USB ready.
Step 4: Boot Kali Linux from the USB
Restart your computer with the USB plugged in.
Enter the BIOS/UEFI settings (usually by pressing a key like F12, Esc, Del, or F2 right after starting the computer).
Change the boot order to boot from the USB first.
Save changes and reboot.
You should now see the Kali Linux boot menu! Select "Live (amd64)" to start Kali without installation.
(Optional) Step 5: Adding Persistence
Persistence allows you to save files, system changes, or even installed tools across reboots — super useful for real-world usage.
Setting up persistence requires creating an extra partition on the USB and tweaking a few settings. Here's a quick overview:
Create a second partition labeled persistence.
Format it as ext4.
Mount it and create a file /persistence.conf inside it containing the single line: / union
When booting Kali, choose the "Live USB Persistence" option.
Persistence is a little more technical but absolutely worth it if you want a real working Kali USB system!
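Here's roughly what those steps look like from an existing Linux machine. This is a sketch only: /dev/sdb and the partition number 3 are assumptions, so confirm your device with lsblk first, because running these against the wrong disk will destroy data.
sudo parted /dev/sdb mkpart primary ext4 7GiB 100%   # add a partition after the live image
sudo mkfs.ext4 -L persistence /dev/sdb3              # format it and label it "persistence"
sudo mount /dev/sdb3 /mnt
echo "/ union" | sudo tee /mnt/persistence.conf      # the one-line config Kali looks for
sudo umount /mnt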
Troubleshooting Common Issues
USB not showing up in boot menu?
Make sure Secure Boot is disabled in BIOS.
Ensure the USB was properly written (try writing it again if necessary).
Kali not booting properly?
Verify the ISO file integrity.
Try a different USB port (sometimes a USB 2.0 port works where a USB 3.0 port doesn't).
Persistence not working?
Double-check the /persistence.conf file and make sure it's correctly placed.
Conclusion
Installing Kali Linux onto a USB stick is one of the smartest ways to carry a secure, full-featured hacking lab with you anywhere. Whether you’re practicing ethical hacking, doing security audits, or just exploring the world of cybersecurity, a Kali USB drive gives you power, portability, and flexibility all at once.
Once you’re set up, the possibilities are endless — happy hacking! 🔥
2 notes · View notes
skytechacademy · 1 year ago
Text
Can you explain the differences between A+, Network+, and Security+ certifications from CompTIA? Which certification is considered more valuable and why?
Certainly! CompTIA offers several certifications that are widely recognized in the IT industry. A+, Network+, and Security+ are three of the most popular certifications, each focusing on different areas of IT. Here's a breakdown of each:
A+ Certification:
Focus: This certification is geared towards entry-level IT professionals and covers foundational skills in IT hardware, software, networking, and troubleshooting.
Topics: A+ covers areas such as PC hardware, operating systems (Windows, Linux, macOS), networking, mobile devices, security, and troubleshooting.
Job Roles: A+ certification holders often work in roles such as technical support specialists, help desk technicians, and field service technicians.
Value: A+ is valuable for individuals starting their IT careers as it provides a solid foundation of IT knowledge and skills. It's often a prerequisite for more advanced certifications.
Network+ Certification:
Focus: Network+ focuses specifically on networking concepts and skills required for IT professionals working with networks, both wired and wireless.
Topics: Network+ covers areas such as network technologies, installation and configuration, media and topologies, management, security, and troubleshooting.
Job Roles: Network+ certification holders typically work in roles such as network administrators, network technicians, and systems engineers.
Value: Network+ is valuable for individuals seeking to specialize in networking. It provides a comprehensive understanding of networking fundamentals and is recognized by employers as validation of networking knowledge and skills.
Security+ Certification:
Focus: Security+ is focused on cybersecurity concepts and skills, covering best practices in securing networks, systems, and applications.
Topics: Security+ covers areas such as network security, compliance and operational security, threats and vulnerabilities, application, data, and host security, access control, identity management, and cryptography.
Job Roles: Security+ certification holders often work in roles such as security analysts, security specialists, security administrators, and network security engineers.
Value: Security+ is highly valuable in today's cybersecurity landscape. It demonstrates proficiency in cybersecurity principles and practices and is often required or recommended for cybersecurity-related roles.
In terms of which certification is considered more valuable, it largely depends on your career goals and the specific job role you're targeting. However, the CompTIA Security+ certification is often regarded as more valuable in terms of salary and job prospects due to the increasing demand for cybersecurity professionals and the critical importance of cybersecurity in modern IT environments. That said, all three certifications have their own merit and can be valuable depending on your career path and interests.
7 notes · View notes
fruttymoment · 2 years ago
Note
Whoa linux user
Do you have a guide on how to switch to it? I have zero coding knowledge (i think that's required) and I trust you with my life
I perfectly understand the "linux is scary and requires very big brain and coding its too hard to use!" thought coming from a Windows/Mac guy, I really do! But in the end, a Linux distro is just the Linux kernel bundled with a desktop environment and everything else it needs to work as a complete operating system that does what you want it to do
Coding on Linux is not required. Linux has so many distros at this point that are designed to be beginner friendly, requiring "no knowledge but TO gain knowledge while using it"
The Linux terminal is the thing that scares most users, but trust me, once you get used to it you'll realize how efficient it is to operate your computer and do certain tasks from THE terminal instead! In the end, the cold-looking white-text-on-black-background terminals are the REAL face of computers. The desktop environment is made so EVERYONE can use computers!
The GNU/Linux terminal speaks bash by default. In a nutshell, basic commands are pretty easy to learn, actually!
Super beginner friendly Linux distros are designed for people (YOUU) who have no experience whatsoever with Linux! They are designed and engineered so you don't have to use the terminal much! For example, Linux Mint is the best distro to start with. It looks and feels like Windows, even! And Mint does not require much terminal usage. That is also their mission: to make a Linux distro friendly enough that no terminal usage is needed!
As easy as this sounds, I actually do not recommend staying this far away from the Linux terminal. Please start with Linux Mint if you're gonna, it's just the best for beginners, but also please don't avoid the terminal much! The Linux terminal is important to learn because it also teaches you how a computer really works, and certain operations are much more efficient to do via terminal anyway!
Push yourself to interact with the terminal, even. Learn very basic commands like "shutdown now", and the "sudo" privilege and how it works (Linux always asks for your password while doing stuff, and you also can't do much without sudo privileges!)
"sudo" is the command that gives you the REAL admin privileges to do ANYTHING. With your password and sudo, you can even delete your bootloader lol. Linux won't stop you
This means to be extremely careful while using sudo, though! You can do ANYTHING with sudo privileges, and that also includes accidentally trashing your computer! Unlike Windows, which doesn't even let you uninstall Edge, Linux has no boundaries. It's like "we are gonna assume you know what you are doing."
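In everyday use tho, sudo mostly looks like this (a lil sketch; vlc is just an example package on a Mint/Ubuntu type distro):
apt install vlc        # fails! a normal user isn't allowed to install software
sudo apt install vlc   # asks for your password, then actually installs it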
Of course, friendly distros DO warn you on certain stuff, so don't worry too much!
So ye. U can use Linux with no coding knowledge, but I don't recommend staying like that. After starting to use Linux, you GOTTA let it teach you stuff!
And to the "switching to linux for the first time" part;
I recommend not deleting your main Windows, actually. For your first time using Linux I heavily recommend "dual booting", which simply means booting more than one operating system on a computer! You can use BOTH Linux and Windows that way! Although, you need to shut down your PC if you want to switch between them, and pick one in the boot menu
This is because if something goes wrong, or you get very confused, you just let Windows be ready in there. Only make the switch the moment you feel you can operate Linux easily and without issues!
Dual booting basically slices your disk and creates partitions for the operating systems. For example, if you have a 512GB SSD, with dual booting you can slice it and make Windows use 256GB and Linux use 256GB too! Ofc u can change the numbers here (Linux Mint will help u!)
Before completely switching to Linux, be aware that it's a bit of a different world. Sure, very popular software exists on Linux too, but some software may not support Linux. Adobe products don't support Linux, for example! You can of course just run them with the Wine software heh, but that would be a bit of work!
Another problem will be certain online games. Online games do not like Linux because of how easy it is to manipulate the system, so they either just don't run on Linux or they ban/kick you when you try to run them on Linux
An example is Valorant. Valorant does not run on Linux!
And any online game that has a cheap anticheat system will be a problem!
With that being said, Linux now supports practically every game from Steam, with the Proton software. Just be careful about them online ones! If an online game natively supports Linux (TF2, for example!), it won't be a problem! Check the Steam game's info thingy for it!
Oh and official Minecraft works on Linux lol
Discord, Spotify and other popular software also work on Linux!
Linux in fact has an "app manager" program in its distros, letting you install stuff with no terminal whatsoever! Think Google Play Store but on le pc!
Anyways hehe, that's it fo me bascallya! If u wanna switch to Linux with no experience, start with the Linux Mint distro i say and explore it well! Tinker with everything before fully installing it, don't worry about it! Linux is free. Linux does not care if you want to kill the entire system, even. Linux is freedom
Also please research dual booting! You're gonna need a 4GB+ USB for it, and a program like Rufus!
The site of Linux Mint has everything you need in detail, step by step ^^ good luck!
26 notes · View notes
a-girl-called-bob · 1 year ago
Text
Tumblr media
I don't want to reply to this on the post it's on, because it'd be getting pretty far away from the original point (that being that chromebooks have actively eroded the technological literacy of large proportions of young people, especially in the US), but I felt enough of a need to respond to these points to make my own post.
Point 1 is... pretty much correct in the context that it's replying to; the Google Problem in this case being the societal impact of Google as a company and how their corporate decisions have shaped the current technological landscape (again, especially in the US). I'd argue it's less like saying Firefox is a good alternative for your dishwasher and more like saying Firefox is a solution for climate change, but whatever, the point's the same. You can't personal choices your way out of systemic issues.
Point 2 is only correct in the most pedantic way; we both know that 'running on a Linux kernel' isn't what we mean when we talk about Linux systems. It's one true definition, but not a functional or useful one. Android and ChromeOS (and to a lesser extent, MacOS, and to an even greater extent, the fucking NES Mini) all share a particular set of characteristics that run counter to the vast majority of FOSS and even Enterprise Linux distributions. Particularly, they're a.) bundled with their hardware, b.) range from mildly annoying to damn near impossible (as well as TOS-breaking) to modify or remove from said hardware, and c.) contain built-in access restrictions that prevent the user from running arbitrary Linux programs. I would consider these systems to all be Linux-derived, but their design philosophies and end goals are fundamentally different from what we usually mean when we talk about 'a Linux system'. Conflating the two is rhetorically counterproductive when you fucking know what we mean.
Point 3 is a significant pet peeve of mine, and the primary reason why I feel the need to actually respond to this, even if only on my own blog. "Linux is not a consumer operating system" is such a common refrain, it's practically a meme; yet I've never seen someone explain why they think that in a way that wasn't based on a 30-year-old conception of what Linux is and does.
If you pick up Linux Mint or Ubuntu or, I don't know, KDE Plasma or something, the learning curve for the vast majority of things the average user needs to do is nearly identical to what it would be on Windows. Office software is the same. Media players are the same. Files and folders are the same. Web browsers are the same. GIMP's a little finicky compared to Photoshop, but it also didn't cost you anything, and there are further alternatives if you look for them. There are a few differences in terms of interface, but if you're choosing which one to learn the first time you ever use a computer, the difference isn't that large. Granted, you can also do a bunch of stuff with the command line; you could say the same of PowerShell, though, and you don't have to use either for most things.
Hell, in some respects Windows has been playing catch-up: the Windows Store post-dates graphical software browsers on Linux by at least a decade, maybe more. Finding and installing programs has, quite literally, never been harder on Linux than on Windows, and only recently has Windows caught up. I used Linux as my daily driver for five years before I ever regularly had to open up the terminal (and even then it was only because I started learning Python). I was also seven when I started. If the average teenager these days has worse computer literacy than little seven-year-old Cam Cade (who had, let me think, just about none to start with), I think we have bigger issues to worry about.
In my opinion, Linux users saying Linux 'isn't for consumers' is an elitist, condescending attitude that's not reflective of the actual experience of using a Linux system. To say so also devalues and trivializes the work put into projects like Mint and Ubuntu, which are explicitly intended to be seamlessly usable for the vast majority of day-to-day computer tasks.
3 notes · View notes