#remote access services
simpliasstuff · 2 years
Text
0 notes
ivygorgon · 2 months
Text
AN OPEN LETTER to THE U.S. CONGRESS
Fund the Affordable Connectivity Program NOW!
130 so far! Help us get to 250 signers!
I’m a concerned constituent writing to urge you to fund the Affordable Connectivity Program (ACP). Digital connectivity is a basic necessity in our modern world and the internet must be treated as a public utility. We use the internet to apply for jobs, perform our jobs, receive telehealth medical treatment, and pay bills, and students use it to complete homework assignments. But for millions of people in rural and urban areas, and Tribal communities, the internet is a luxury they cannot afford.

Failure by Congress to fund this program will force millions of households already on tight budgets to choose between staying online or losing access to this essential service. If Congress doesn’t act fast, funding for the Affordable Connectivity Program will run out and more than 22 million Americans -- 1 in 6 households -- will lose this vital service. The implications of this will be devastating.

In 2019, 18% of Native people living on Tribal land had no internet access; 33% relied on cell phone service for the internet; and 39% had spotty or no connection to the internet at home on their smartphone. The ACP has enrolled 320,000 households on Tribal lands -- important progress. The largest percentage gains in broadband access are in rural areas. Nearly half of military families are enrolled in the ACP, as are one in four African American and Latino households.

Losing access and training on using computers and the internet will have devastating impacts on all these communities as technology becomes increasingly integral to work, education, health, and our everyday lives. Without moves to address tech inequality, low-income communities and communities of color are heading towards an “unemployment abyss.”

The Affordable Connectivity Program has broad bipartisan support because it is working. As your constituent, I am urging you to push for renewed funding for the ACP before it runs out in the coming weeks.
▶ Created on April 11 by Jess Craven
📱 Text SIGN PJXULY to 50409
🤯 Liked it? Text FOLLOW JESSCRAVEN101 to 50409
3 notes · View notes
alex51324 · 1 year
Text
PSA:  I’m now back at GHQ, after an extended visit to my dad’s place in the Land of Limited Internet.  If there was anything you would have expected me to reply to in the last couple of weeks, I probably didn’t see it, so feel free to put it in front of my face again.  
6 notes · View notes
fingertipsmp3 · 1 year
Text
I should neverrr have accepted this shift. Literally every problem I have would be irrelevant if I didn’t have to go to work today
#guys it is fucking SNOWING in MARCH. we have got flurries and we have got 2 inches already on the ground#and ya girl works ✨at an extremely remote nature preserve which is accessible only by a winding country road that will PROBABLY NOT BE#GRITTED and also who the fuck is going to visit in this weather?? 90% of the activities you can do there involve BEING OUTSIDE#(the other 10% is gift shop and food; the latter of which i am partly responsible for. but like. realistically does anyone go there for food#it’s more like you’re there anyway and you get hungry so you might as well have a coffee and/or a sandwich. we are not starbucks. no one is#coming to me for a machine cappucino and then just leaving because they got everything they came for. it’s more like you come to see some#wildlife and then you see me in my apron looking bored next to a coffee machine and a display of cakes and you think ‘might as well’#the only people coming here specifically for food and then leaving are the people who buy the too good to go bags#and even THEY usually hang out on the reserve a bit. like. you’re here. might as well go see a gannet or two)#so????? to summarise i don’t even know if we’re open today. nobody tells me anything. plus my shift doesn’t even start until 11:30 anyway#my mom’s friend who lives close by is doing a reccy for me but i can’t imagine she’ll find anything pertinent unless she goes at opening#time; which isn’t for another hour#i’ve formed a plan. if no one calls me by 9:45 (past opening time) i’m going to call them and be like ‘hey i’m not coming in; i can’t#physically get there. my village hasn’t been gritted [true] and is basically an ice rink and i’m worried if i get there i might just be#stuck there [also true]. record it as an unpaid absence if you want because i’m not sick or anything’#i’d literally be amazed if they opened tbh. like we’ll get zero customers. they’d have to pay me ~£50 if i went in and will they even make#£50??? a very good question. 
PLUS there’s two other people working in the cafe with me. and my manager. that’s like.. a solid £200 of wages#on a day when we’d be unlikely to get enough customers to make £200. no way they’ll open; and if they do they won’t want me to come in#like girl what is the point of me coming in to cover the lunch service if we’re basically not going to DO a lunch service lmao#i shouldn’t have accepted this shift when it was offered to me. i should’ve been like ‘no girl i can’t because i don’t want to ❤️#good luck tho’#anyway. we’ll see what happens i guess#personal
1 note · View note
richardmhicks · 2 years
Text
Always On VPN RADIUS Configuration Missing
Windows Server Routing and Remote Access Service (RRAS) is a popular choice for administrators deploying Always On VPN. It is easy to configure and scales out well. Most commonly, RRAS servers are configured to use RADIUS authentication to provide user authentication for Always On VPN client connections. The RADIUS server can be Microsoft Network Policy and Access Server (NPAS, or simply NPS)…
2 notes · View notes
myremoteoffice · 5 days
Text
Best Cloud Hosting Services in UK
In today’s remote work landscape, a reliable UK cloud hosting provider is paramount. Look no further than My Remote Office! Our UK-based cloud hosting plans are designed with remote teams in mind.
We prioritize scalability to ensure seamless access to essential applications and data, regardless of location. Robust security measures keep your company’s information safe, and our dedicated UK-based support team minimizes disruption by readily addressing any technical issues.
Experience the peace of mind and smooth operation that comes with a secure, reliable, and well-supported cloud hosting foundation from My Remote Office.
1 note · View note
lionheartlr · 13 days
Text
Discover Bolivia: Your Ultimate Travel Guide
A Glimpse into Bolivia’s Rich History Bolivia, a landlocked country in South America, boasts a diverse and rich history. It was originally inhabited by ancient civilizations, including the Tiwanaku and the Inca Empire. Spanish conquistadors arrived in the 16th century, leading to centuries of colonial rule. Bolivia gained independence in 1825 but has since experienced a turbulent political…
#Discover Bolivia: Your Ultimate Travel Guide#adventure#africa#destinations#colonial cities#anticuchos#corn
0 notes
tocourtdisaster · 8 months
Text
The network is down at my job meaning I can’t clock in, can’t open any software, can’t do anything except sit here staring at loading screens until they time out.
I should have just stayed in bed this morning.
1 note · View note
lifestyleblogeruk · 11 months
Text
The Top 5 Signs That Your Security Gate Needs Repairs
Security gates serve as the first line of defense for both residential and commercial properties. Their primary purpose is to keep intruders at bay and ensure the safety of the premises. However, like any other mechanical system, security gates are prone to wear and tear over time. Regular maintenance is essential to keep them functioning optimally, but sometimes, repairs become inevitable.
In this blog post, we will discuss the top five signs that indicate your security gate requires repairs. Identifying these signs early on can save you from potential security breaches and the inconvenience of unexpected gate malfunctions. If you notice any of the following issues, it's time to consider seeking professional security gate maintenance services.
1. Unusual Noises and Jerky Movements: One of the most apparent signs of a security gate in distress is strange noises and jerky movements during operation. If your gate screeches, grinds, or makes unusual sounds, it might be due to worn-out components or damaged gears. Ignoring these noises can exacerbate the problem, leading to costly repairs in the long run. Similarly, if the gate's movement becomes erratic or it opens and closes abruptly, there could be an issue with the gate motor or automation system.
2. Slow Response to Remote Control Commands: Does your security gate take longer than usual to respond to remote control commands? Delayed responses or intermittent functioning can be indicative of a faulty gate access control system. It could be a problem with the remote control itself, the receiver, or the gate's electronic components. To ensure reliable access control and prevent potential security risks, contact experts in remote control security gate repairs.
3. Gate Misalignment and Sagging: Over time, the weight and frequent movements of the gate can cause misalignment or sagging issues. Misaligned gates may have difficulty closing properly, leaving gaps that compromise security. Sagging gates, on the other hand, exert unnecessary strain on the hinges and motor, potentially leading to more severe damages. Residential and commercial security gate services can realign the gate and address any structural issues to restore its proper functionality.
4. Frequent Tripping of Circuit Breakers: An automatic security gate that frequently trips the circuit breakers might be facing power supply problems or an electrical issue. It could result from damaged wiring, faulty components, or security gate motor replacement requirements. Avoid resetting the circuit breakers repeatedly, as it might lead to further complications. Instead, contact professionals in emergency security gate repairs to diagnose and resolve the root cause of the issue.
5. Visible Signs of Wear and Rust: Regularly inspect your security gate for visible signs of wear, corrosion, or rust. Steel and iron security gates are susceptible to rust, especially in humid or coastal environments. Rust can weaken the structure, affecting the gate's overall integrity. Additionally, worn-out parts, such as hinges, tracks, and rollers, can hinder smooth gate operation. Engage in proactive maintenance, and if needed, seek iron security gate maintenance to address rust and prevent further deterioration.
In conclusion, security gate repairs are not to be taken lightly, as a malfunctioning gate can compromise the safety of your property and its occupants. Regular maintenance can prolong the life of your security gate, but when signs of trouble arise, prompt action is crucial. Whether it's residential or commercial security gate services, emergency repairs, or gate access control fixes, investing in professional assistance ensures a well-functioning and secure gate for your peace of mind. Contact us!
0 notes
flawlessviewmedia · 1 year
Text
Amazon Reveals First Satellite Internet Dishes to Compete with Starlink
Amazon has officially announced its first satellite internet dishes as it looks to compete with Elon Musk’s Starlink in the race to provide internet connectivity to remote and underserved regions around the world. The e-commerce giant has been developing its own satellite internet project, codenamed Project Kuiper, since 2019 and recently revealed its first set of antennas. The announcement comes…
0 notes
hivepro · 2 years
Link
A remote code execution (RCE) vulnerability (CVE-2021-22941) affecting Citrix ShareFile Storage Zones Controller was used by Prophet Spider to attack a Microsoft Internet Information Services (IIS) web server. The attacker took advantage of the flaw to launch a web shell that allowed the download of further tools.
Prophet Spider also exploits known Log4j vulnerabilities in VMware Horizon (CVE-2021-44228, CVE-2021-45046, CVE-2021-44832). Prophet Spider most typically used encoded PowerShell instructions to download a second-stage payload to the targeted PCs after exploiting the vulnerabilities. The specifics of that payload are determined by the attacker’s motivations and aims, such as crypto mining, ransomware, and extortion.
1 note · View note
richardmhicks · 1 month
Text
Always On VPN May 2024 Security Updates
Once again, Microsoft has released its monthly security updates. For May 2024, there are several vulnerabilities in services related to Always On VPN that administrators will want to pay close attention to. Microsoft has identified known issues in the Routing and Remote Access Service (RRAS) and the Remote Access Connection Manager (RasMan) service for this release cycle. RRAS This month,…
0 notes
myremoteoffice · 13 days
Text
Best Cloud Hosting Provider in UK
In today’s dynamic digital landscape, choosing the right cloud hosting provider is paramount for businesses of all sizes. Here at My Remote Office, we understand the importance of reliable, scalable, and secure cloud hosting solutions. That’s why we’ve compiled a comprehensive guide to the top cloud hosting providers in the UK, empowering you to make an informed decision.
1 note · View note
punisheddonjuan · 4 months
Text
How I ditched streaming services and learned to love Linux: A step-by-step guide to building your very own personal media streaming server (V2.0: REVISED AND EXPANDED EDITION)
This is a revised, corrected and expanded version of my tutorial on setting up a personal media server that previously appeared on my old blog (donjuan-auxenfers). I expect that that post is still making the rounds (hopefully with my addendum on modifying group share permissions in Ubuntu to circumvent 0x8007003B "Unexpected Network Error" messages in Windows when transferring files) but I have no way of checking. Anyway this new revised version of the tutorial corrects one or two small errors I discovered when rereading what I wrote, adds links to all products mentioned and is just more polished generally. I also expanded it a bit, pointing more adventurous users toward programs such as Sonarr/Radarr/Lidarr and Overseerr which can be used for automating user requests and media collection.
So then, what is this tutorial? This is a tutorial on how to build and set up your own personal media server using Ubuntu as an operating system and Plex (or Jellyfin) to not only manage your media, but to stream that media to your devices both locally at home, and remotely to anywhere in the world where you have an internet connection. By building a personal media server and stuffing it full of films, television shows and music that you acquired through indiscriminate and voracious media piracy various legal methods like ripping your own physical media to disk, you’ll be free to completely ditch paid streaming services. No more will you have to pay for Disney+, Netflix, HBO Max, Hulu, Amazon Prime, Peacock, CBS All Access, Paramount+, Crave or any other streaming service that is not named Criterion Channel (which is actually good). If you want to watch your favourite films and television shows, you’ll have your own custom service that only features things that you want to see, and where you have control over your own files and how they’re delivered to you. And for music fans out there, both Jellyfin and Plex support music streaming, meaning you can ditch music streaming services too. Goodbye Spotify, YouTube Music, Tidal and Apple Music; welcome back unreasonably large MP3 (or FLAC) collections.
On the hardware front, I’m going to offer a few options catered towards differing budgets and media library sizes. The cost of getting a media server up and running using this guide will cost you anywhere from $450 CDN/$325 USD at the entry level to $1500 CDN/$1100 USD at the high end. My own server was priced closer to the higher figure, with much of that cost being hard drives. If that seems excessive, consider for a moment, maybe you have a roommate, a close friend, or a family member who would be willing to chip in a few bucks towards your little project provided they get a share of the bounty. This is how my server was funded. It might also be worth thinking about cost over time, how much you spend yearly on subscriptions vs. a one time cost of setting up a server. Additionally there's just the joy of being able to scream "fuck you" at all those show cancelling, movie deleting, hedge fund vampire CEOs who run the studios through denying them your money. Drive a stake through David Zaslav's heart.
On the software side I will walk you step-by-step through installing Ubuntu as your server's operating system, configuring your storage as a RAIDz array with ZFS, sharing your zpool to Windows with Samba, running a remote connection between your server and your Windows PC, and then a little about getting started with Plex/Jellyfin. Every terminal command you will need to input will be provided, and I even share a custom bash script that will make used vs. available drive space on your server display correctly in Windows.
If you have a different preferred flavour of Linux (Arch, Manjaro, Redhat, Fedora, Mint, OpenSUSE, CentOS, Slackware etc. et. al.) and are aching to tell me off for being basic and using Ubuntu, this tutorial is not for you. The sort of person with a preferred Linux distro is the sort of person who can do this sort of thing in their sleep. Also I don't care. This tutorial is intended for the average home computer user. This is also why we’re not using a more exotic home server solution like running everything through Docker Containers and managing it through a dashboard like Homarr or Heimdall. While such solutions are fantastic and can be very easy to maintain once you have it all set up, wrapping your brain around Docker is a whole thing in and of itself. If you do follow this tutorial and had fun putting everything together, then I would encourage you to return in a year’s time, do your research and set up everything with Docker Containers.
Lastly, this is a tutorial aimed at Windows users. Although I was a daily user of OS X for many years (roughly 2008-2023) and I've dabbled quite a bit with various Linux distributions (mostly Ubuntu and Manjaro), my primary OS these days is Windows 11. Many things in this tutorial will still be applicable to Mac users, but others (e.g. setting up shares) you will have to look up for yourself. I doubt it would be difficult to do so.
Nothing in this tutorial will require feats of computing expertise. All you will need is a basic computer literacy (i.e. an understanding of what a filesystem and directory are, and a degree of comfort in the settings menu) and a willingness to learn a thing or two. While this guide may look overwhelming at first glance, it is only because I want to be as thorough as possible. I want you to understand exactly what it is you're doing, I don't want you to just blindly follow steps. If you half-way know what you’re doing, you will be much better prepared if you ever need to troubleshoot.
Honestly, once you have all the hardware ready it shouldn't take more than a weekend to get everything up and running.
(This tutorial is just shy of seven thousand words long so the rest is under the cut.)
Step One: Choosing Your Hardware
Linux is a light weight operating system, depending on the distribution there's close to no bloat. There are recent distributions available at this very moment that will run perfectly fine on a fourteen year old i3 with 4GB of RAM. Moreover, running Plex or Jellyfin isn’t resource intensive in 90% of use cases. All this is to say, we don’t require an expensive or powerful computer. This means that there are several options available: 1) use an old computer you already have sitting around but aren't using 2) buy a used workstation from eBay, or what I believe to be the best option, 3) order an N100 Mini-PC from AliExpress or Amazon.
Note: If you already have an old PC sitting around that you’ve decided to use, fantastic, move on to the next step.
When weighing your options, keep a few things in mind: the number of people you expect to be streaming simultaneously at any one time, the resolution and bitrate of your media library (4k video takes a lot more processing power than 1080p) and most importantly, how many of those clients are going to be transcoding at any one time. Transcoding is what happens when the playback device does not natively support direct playback of the source file. This can happen for a number of reasons, such as the playback device's native resolution being lower than the file's internal resolution, or because the source file was encoded in a video codec unsupported by the playback device.
Ideally we want any transcoding to be performed by hardware. This means we should be looking for a computer with an Intel processor with Quick Sync. Quick Sync is a dedicated core on the CPU die designed specifically for video encoding and decoding. This specialized hardware makes for highly efficient transcoding both in terms of processing overhead and power draw. Without these Quick Sync cores, transcoding must be brute forced through software. This takes up much more of a CPU’s processing power and requires much more energy. But not all Quick Sync cores are created equal, and you need to keep this in mind if you've decided either to use an old computer or to shop for a used workstation on eBay.
Any Intel processor from second generation Core (Sandy Bridge circa 2011) onwards has Quick Sync cores. It's not until 6th gen (Skylake), however, that the cores support the H.265 HEVC codec. Intel’s 10th gen (Comet Lake) processors introduce support for 10bit HEVC and HDR tone mapping. And the recent 12th gen (Alder Lake) processors brought with them hardware AV1 decoding. As an example, while an 8th gen (Coffee Lake) i5-8500 will be able to hardware transcode a H.265 encoded file, it will fall back to software transcoding if given a 10bit H.265 file. If you’ve decided to use that old PC or to look on eBay for an old Dell Optiplex, keep this in mind.
Note 1: The price of old workstations varies wildly and fluctuates frequently. If you get lucky and go shopping shortly after a workplace has liquidated a large number of their workstations you can find deals for as low as $100 on a barebones system, but generally an i5-8500 workstation with 16gb RAM will cost you somewhere in the area of $260 CDN/$200 USD.
Note 2: The AMD equivalent to Quick Sync is called Video Core Next, and while it's fine, it's not as efficient and not as mature a technology. It was only introduced with the first generation Ryzen CPUs and it only got decent with their newest CPUs. Besides, we want something cheap.
Alternatively you could forgo having to keep track of what generation of CPU is equipped with Quick Sync cores that feature support for which codecs, and just buy an N100 mini-PC. For around the same price or less of a used workstation you can pick up a Mini-PC with an Intel N100 processor. The N100 is a four-core processor based on the 12th gen Alder Lake architecture and comes equipped with the latest revision of the Quick Sync cores. These little processors offer astounding hardware transcoding capabilities for their size and power draw. Otherwise they perform equivalent to an i5-6500, which isn't a terrible CPU. A friend of mine uses an N100 machine as a dedicated retro emulation gaming system and it does everything up to 6th generation consoles just fine. The N100 is also a remarkably efficient chip, it sips power. In fact, the difference between running one of these and an old workstation could work out to hundreds of dollars a year in energy bills depending on where you live.
You can find these Mini-PCs all over Amazon or for a little cheaper on AliExpress. They range in price from $170 CDN/$125 USD for a no name N100 with 8GB RAM to $280 CDN/$200 USD for a Beelink S12 Pro with 16GB RAM. The brand doesn't really matter, they're all coming from the same three factories in Shenzen, go for whichever one fits your budget or has features you want. 8GB RAM should be enough, Linux is lightweight and Plex only calls for 2GB RAM. 16GB RAM might result in a slightly snappier experience, especially with ZFS. A 256GB SSD is more than enough for what we need as a boot drive, but going for a bigger drive might allow you to get away with things like creating preview thumbnails for Plex, but it’s up to you and your budget.
The Mini-PC I wound up buying was a Firebat AK2 Plus with 8GB RAM and a 256GB SSD.
Note: Be forewarned that if you decide to order a Mini-PC from AliExpress, note the type of power adapter it ships with. The Mini-PC I bought came with an EU power adapter, and I had to supply my own North American power supply. Thankfully this is a minor issue, as barrel-plug 30W/12V/2.5A power adapters are plentiful and can be had for $10.
Step Two: Choosing Your Storage
Storage is the most important part of our build. It is also the most expensive. Thankfully it’s also the most easily upgrade-able down the line.
For people with a smaller media collection (4TB to 8TB), a more limited budget, or who will only ever have two simultaneous streams running, I would say that the most economical course of action would be to buy a USB 3.0 8TB external HDD. Something like this one from Western Digital or this one from Seagate. One of these external drives will cost you in the area of $200 CDN/$140 USD. Down the line you could add a second external drive or replace it with a multi-drive RAIDz set up such as detailed below.
If a single external drive is the path for you, move on to step three.
For people with larger media libraries (12TB+), who prefer media in 4k, or who care about data redundancy, the answer is a RAID array featuring multiple HDDs in an enclosure.
Note: If you are using an old PC or used workstation as your server and have the room for at least three 3.5" drives, and as many open SATA ports on your motherboard, you won't need an enclosure; just install the drives into the case. If your old computer is a laptop or doesn’t have room for more internal drives, then I would suggest an enclosure.
The minimum number of drives needed to run a RAIDz array is three, and seeing as RAIDz is what we will be using, you should be looking for an enclosure with three to five bays. I think that four disks makes for a good compromise for a home server. Regardless of whether you go for a three, four, or five bay enclosure, do be aware that in a RAIDz1 array the space equivalent of one drive is dedicated to parity, leaving you with a usable fraction of 1 − 1/n of the raw capacity, i.e. in a four bay enclosure equipped with four 12TB drives configured as a RAIDz1 array we would be left with a total of 36TB of usable space (48TB raw size). The reason why we might sacrifice storage space in such a manner will be explained in the next section.
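The parity math above can be sanity-checked with two lines of shell arithmetic. The drive count and size here are just the four-drive, 12TB example from this paragraph, swap in your own numbers:

```shell
# Usable space in a RAIDz1 vdev: one drive's worth of capacity goes to parity,
# so usable = (n - 1) * drive_size, i.e. a (1 - 1/n) share of the raw total.
drives=4       # number of drives in the vdev (the example above)
size_tb=12     # capacity of each drive in TB
raw=$(( drives * size_tb ))
usable=$(( (drives - 1) * size_tb ))
echo "RAIDz1, ${drives} x ${size_tb}TB drives: ${usable}TB usable of ${raw}TB raw"
# -> RAIDz1, 4 x 12TB drives: 36TB usable of 48TB raw
```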
A four bay enclosure will cost somewhere in the area of $200 CDN/$140 USD. You don't need anything fancy: no hardware RAID controls (RAIDz is done entirely in software), not even USB-C. An enclosure with USB 3.0 will perform perfectly fine. Don’t worry too much about USB speed bottlenecks. A mechanical HDD will be limited by the speed of its mechanism long before it will be limited by the speed of a USB connection. I've seen decent looking enclosures from TerraMaster, Yottamaster, Mediasonic and Sabrent.
When it comes to selecting the drives, as of this writing, the best value (dollar per gigabyte) are those in the range of 12TB to 20TB. I settled on 12TB drives myself. If 12TB to 20TB drives are out of your budget, go with what you can afford, or look into refurbished drives. I'm not sold on the idea of refurbished drives but many people swear by them.
When shopping for hard drives, search for drives designed specifically for NAS use. Drives designed for NAS use typically have better vibration dampening and are designed to be active 24/7. They will also often make use of CMR (conventional magnetic recording) as opposed to SMR (shingled magnetic recording). This nets them a sizable read/write performance bump over typical desktop drives. Seagate IronWolf and Toshiba NAS are both well regarded brands when it comes to NAS drives. I would avoid Western Digital Red drives at this time. WD Reds were a go-to recommendation up until earlier this year, when it was revealed that they ship with firmware that throws up false SMART warnings at the three year mark telling you to replace the drive, often when there is nothing at all wrong with it and the drive is likely good for another six, seven, or more years.
Step Three: Installing Linux
For this step you will need a USB thumbdrive of at least 6GB in capacity, an .ISO of Ubuntu, and a way to make that thumbdrive bootable media.
First download a copy of Ubuntu Desktop. (For best performance we could download the Server release, but for new Linux users I would recommend against it. The server release is strictly command line interface only, and having a GUI is very helpful for most people. Not many people are wholly comfortable doing everything through the command line; I'm certainly not one of them, and I grew up with DOS 6.0.) 22.04.3 Jammy Jellyfish is the current Long Term Support release; this is the one to get.
Download the .ISO, and then download and install balenaEtcher on your Windows PC. BalenaEtcher is an easy to use program for creating bootable media: you simply insert your thumbdrive, select the .ISO you just downloaded, and it will create a bootable installation media for you.
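It's also worth confirming the download arrived intact before you flash it, since a corrupted .ISO makes for a miserable install. A sketch of the check follows; the printf line only creates a stand-in file so the demo has something to hash, and the checksum you compare against is the one Ubuntu publishes alongside each release on releases.ubuntu.com:

```shell
# Verify an .ISO's SHA-256 before flashing it. On Windows the equivalent is:
#   certutil -hashfile ubuntu-22.04.3-desktop-amd64.iso SHA256
# The printf below is only a stand-in so this snippet runs; in practice you
# hash your actual download and compare against Ubuntu's published checksum.
iso="ubuntu-22.04.3-desktop-amd64.iso"
printf 'stand-in contents' > "$iso"
hash=$(sha256sum "$iso" | awk '{print $1}')
echo "SHA-256: $hash"
```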
Once you've made the bootable media and you've got your Mini-PC (or your old PC/used workstation) in front of you, hook it directly into your router with an ethernet cable, then plug in the HDD enclosure, a monitor, a mouse, and a keyboard. Now turn that sucker on and hit whatever key gets you into the BIOS (typically ESC, DEL, or F2). If you're using a Mini-PC, check that the P1 and P2 power limits are set correctly; my N100's P1 limit was set at 10W, a full 20W under the chip's power limit. Also make sure that the RAM is running at its advertised speed; my Mini-PC's RAM was set at 2333MHz out of the box when it should have been 3200MHz. Once you've done that, key over to the boot order and place the USB drive first in the boot order, then save the BIOS settings and restart.
After you restart you’ll be greeted by Ubuntu's installation screen. Installing Ubuntu is really straightforward: select the "minimal" installation option, as we won't need anything on this computer except for a browser (Ubuntu comes preinstalled with Firefox) and Plex Media Server/Jellyfin Media Server. Also remember to delete and reformat that Windows partition! We don't need it.
Step Four: Installing ZFS and Setting Up the RAIDz Array
Note: If you opted for just a single external HDD skip this step and move onto setting up a Samba share.
Once Ubuntu is installed it's time to configure our storage by installing ZFS and building our RAIDz array. ZFS is a "next-gen" file system that is both massively flexible and massively complex. It's capable of snapshot backups and self-healing error correction, and ZFS pools can be configured with drives operating in a supplemental manner alongside the storage vdev (e.g. a fast cache, a dedicated separate intent log, hot-swap spares, etc.). It's also a file system very amenable to fine-tuning: block and sector sizes are adjustable to the use case, and you're afforded the option of different methods of inline compression. If you'd like a very detailed overview and explanation of its various features, plus tips on tuning a ZFS array, check out these articles from Ars Technica. For now we're going to ignore all these features and keep it simple: we're going to pull our drives together into a single vdev running in RAIDz, which will be the entirety of our zpool. No fancy cache drive or SLOG.
Open up the terminal and type the following commands:
sudo apt update
then
sudo apt install zfsutils-linux
This will install the ZFS utility. Verify that it's installed with the following command:
zfs --version
Now, it's time to check that the HDDs we have in the enclosure are healthy, running, and recognized. We also want to find out their device IDs and take note of them:
sudo fdisk -l
Note: You might be wondering why some of these commands require "sudo" in front of them while others don't. "Sudo" is short for "super user do". When and where "sudo" is used has to do with the way permissions are set up in Linux: only the "root" user has the access level to perform certain tasks. As a matter of security and safety, regular user accounts are kept separate from the "root" user. It's not advised (or even possible) to boot into Linux as "root" with most modern distributions. Instead, by using "sudo" our regular user account is temporarily given the power to do otherwise forbidden things. Don't worry about it too much at this stage, but if you want to know more, check out this introduction.
If everything is working you should get a list of the various drives detected along with their device IDs which will look like this: /dev/sdc. You can also check the device IDs of the drives by opening the disk utility app. Jot these IDs down as we'll need them for our next step, creating our RAIDz array.
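One aside worth knowing: /dev/sdX letters are assigned at boot and can shuffle around between reboots. They're fine for building the pool (ZFS stamps its own labels onto the drives), but if you want identifiers that never change, you can list the persistent ones. A guarded sketch (the fallback message just covers systems where no such entries exist):

```shell
# List persistent drive identifiers; unlike /dev/sdX these survive reboots.
ls -l /dev/disk/by-id/ 2>/dev/null || echo "no /dev/disk/by-id entries found"
```

Each entry here is a symlink pointing back at a /dev/sdX device, so you can cross-reference it against the fdisk output above.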
RAIDz is similar to RAID-5. Instead of striping your data over multiple disks, exchanging redundancy for speed and available space (RAID-0), or mirroring your data by writing two copies of every piece (RAID-1), it writes parity blocks across the disks in addition to striping. This provides a balance of speed, redundancy, and available space. If a single drive fails, the parity blocks on the working drives can be used to reconstruct the entire array as soon as a replacement drive is added.
Additionally, RAIDz improves on some common RAID-5 flaws. It's more resilient and capable of self-healing, as it automatically checks data for errors against a checksum. It's more forgiving in this way, and it's likely that you'll be able to detect a dying drive well before it actually fails. A RAIDz array can survive the loss of any one drive.
Note: While RAIDz is indeed resilient, if a second drive fails during the rebuild, you're fucked. Always keep backups of things you can't afford to lose. This tutorial, however, is not about proper data safety.
To create the pool, use the following command:
sudo zpool create "zpoolnamehere" raidz "device IDs of drives we're putting in the pool"
For example, let's creatively name our zpool "mypool". This pool will consist of four drives which have the device IDs: sdb, sdc, sdd, and sde. The resulting command will look like this:
sudo zpool create mypool raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
If, as an example, you bought five HDDs and decided you wanted more redundancy by dedicating two drives to parity, we would modify the command to "raidz2", and it would look something like the following:
sudo zpool create mypool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
An array configured like this is known as RAIDz2 and is able to survive two disk failures.
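A quick back-of-the-envelope check on usable capacity (before filesystem overhead): usable space is roughly the number of drives minus the parity drives, times per-drive capacity. Using the drive counts from the two examples above and 12TB drives:

```shell
# Usable space = (total drives - parity drives) x per-drive capacity.
echo "raidz1: $(( (4 - 1) * 12 ))TB usable"
echo "raidz2: $(( (5 - 2) * 12 ))TB usable"
```

Notice that the five-drive RAIDz2 pool ends up with the same usable space as the four-drive RAIDz pool: the extra drive buys resilience, not capacity.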
Once the zpool has been created, we can check its status with the command:
zpool status
Or more concisely with:
zpool list
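One optional tweak worth making while we're here: ZFS supports transparent inline compression, and the lz4 algorithm is fast enough to be essentially free on this hardware while reclaiming space on compressible files. These are commands for your server, not something to run elsewhere, and they assume the example pool name "mypool":

```
sudo zfs set compression=lz4 mypool
zfs get compression mypool
```

Compressed video won't shrink further, but metadata, subtitles, and any documents you store will.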
The nice thing about ZFS as a file system is that a pool is ready to go immediately after creation. If we were to set up a traditional RAID-5 array using mdadm, we'd have to sit through a potentially hours-long process of reformatting and partitioning the drives. Instead we're ready to go right out of the gate.
The zpool should be automatically mounted to the filesystem after creation, check on that with the following:
df -hT | grep zfs
Note: If your computer ever loses power suddenly, say in event of a power outage, you may have to re-import your pool. In most cases, ZFS will automatically import and mount your pool, but if it doesn’t and you can't see your array, simply open the terminal and type sudo zpool import -a.
By default a zpool is mounted at /"zpoolname". The pool should be under our ownership but let's make sure with the following command:
sudo chown -R "yourlinuxusername" /"zpoolname"
Note: Changing file and folder ownership with "chown" and file and folder permissions with "chmod" are essential commands for much of the admin work in Linux, but we won't be dealing with them extensively in this guide. If you'd like a deeper tutorial and explanation you can check out these two guides: chown and chmod.
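As a tiny illustration you can try anywhere (no sudo needed, since the example only touches files you already own, and the filenames are made up):

```shell
# Create a scratch file, take explicit ownership, then set its permissions.
d=$(mktemp -d)
touch "$d/example"
chown "$(whoami)" "$d/example"   # a no-op here, but this is the usual form
chmod 644 "$d/example"           # owner: read/write; everyone else: read-only
stat -c '%a %U' "$d/example"     # prints the mode and owner, e.g. "644 alice"
```

The three digits of the mode are owner, group, and other permissions respectively, each a sum of read (4), write (2), and execute (1).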
Tumblr media
You can access the zpool file system through the GUI by opening the file manager (the Ubuntu default file manager is called Nautilus) and clicking on "Other Locations" on the sidebar, then entering the Ubuntu file system and looking for a folder with your pool's name. Bookmark the folder on the sidebar for easy access.
Tumblr media
Your storage pool is now ready to go. Assuming that we already have some files on our Windows PC we want to copy to over, we're going to need to install and configure Samba to make the pool accessible in Windows.
Step Five: Setting Up Samba/Sharing
Samba is what's going to let us share the zpool with Windows and allow us to write to it from our Windows machine. First let's install Samba with the following commands:
sudo apt-get update
then
sudo apt-get install samba
Next create a password for Samba.
sudo smbpasswd -a "yourlinuxusername"
It will then prompt you to create a password. Just reuse your Ubuntu user password for simplicity's sake.
Note: if you're using just a single external drive replace the zpool location in the following commands with wherever it is your external drive is mounted, for more information see this guide on mounting an external drive in Ubuntu.
After you've created a password, we're going to create a shareable folder in our pool with this command:
mkdir /"zpoolname"/"foldername"
Now we're going to open the smb.conf file and make that folder shareable. Enter the following command.
sudo nano /etc/samba/smb.conf
This will open the .conf file in nano, the terminal text editor program. Now at the end of smb.conf add the following entry:
["foldername"]
path = /"zpoolname"/"foldername"
available = yes
valid users = "yourlinuxusername"
read only = no
writable = yes
browseable = yes
guest ok = no
Ensure that there are no blank lines between the entries and that there's a space on both sides of each equals sign. Our next step is to allow Samba traffic through the firewall:
sudo ufw allow samba
Finally restart the Samba service:
sudo systemctl restart smbd
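If Samba ever fails to restart, the cause is almost always a typo in smb.conf, and Samba ships a linter for exactly this situation. A guarded sketch (the fallback message just keeps this runnable on machines where Samba isn't installed):

```shell
# Parse smb.conf and report any errors; -s suppresses the "press enter" prompt.
testparm -s 2>/dev/null || echo "testparm not available (is Samba installed?)"
```

Run it after any edit to smb.conf and it will print the parsed configuration, or complain about the exact line it couldn't understand.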
At this point we'll be able to access the pool, browse its contents, and read and write to it from Windows. But there's one more thing left to do: Windows doesn't natively support the ZFS file system and will read the used/available/total space in the pool incorrectly. Windows will report the available space as the total drive space and all used space as null, which leads to Windows displaying only a dwindling amount of "available" space as the drives fill. We can fix this! Functionally it doesn't actually matter (we can still read and write to and from the disk), it just makes it difficult to tell at a glance the proportion of used to available space, so this is an optional step, but one I recommend. (This step is also unnecessary if you're just using a single external drive.) What we're going to do is write a little shell script in Bash. Open nano in the terminal with the command:
nano
Now insert the following code:
#!/bin/bash
CUR_PATH=`pwd`
ZFS_CHECK_OUTPUT=$(zfs get type $CUR_PATH 2>&1 > /dev/null) > /dev/null
if [[ $ZFS_CHECK_OUTPUT == *not\ a\ ZFS* ]]
then
 IS_ZFS=false
else
 IS_ZFS=true
fi
if [[ $IS_ZFS = false ]]
then
 df $CUR_PATH | tail -1 | awk '{print $2" "$4}'
else
 USED=$((`zfs get -o value -Hp used $CUR_PATH` / 1024)) > /dev/null
 AVAIL=$((`zfs get -o value -Hp available $CUR_PATH` / 1024)) > /dev/null
 TOTAL=$(($USED+$AVAIL)) > /dev/null
 echo $TOTAL $AVAIL
fi
Save the script as "dfree.sh" to /home/"yourlinuxusername", then change the file's permissions to make it executable with this command:
sudo chmod 774 dfree.sh
Now open smb.conf with sudo again:
sudo nano /etc/samba/smb.conf
Now add this entry to the top of the configuration file to direct Samba to use the results of our script when Windows asks for a reading on the pool's used/available/total drive space:
[global]
dfree command = /home/"yourlinuxusername"/dfree.sh
Save the changes to smb.conf and then restart Samba again with the terminal:
sudo systemctl restart smbd
Now there’s one more thing we need to do to fully set up the Samba share, and that’s to modify a hidden group permission. In the terminal window type the following command:
sudo usermod -a -G sambashare "yourlinuxusername"
Then restart samba again:
sudo systemctl restart smbd
If we don’t do this last step, everything will appear to work fine. You'll even be able to see and map the drive from Windows and begin transferring files, but you'd soon run into a lot of frustration: every ten minutes or so a file would fail to transfer, and you'd get a window announcing “0x8007003B Unexpected Network Error”. That window requires your manual input to continue the transfer with the next file in the queue, and at the end Windows reattempts whichever files failed the first time around. 99% of the time they go through on that second try, but this is still a major pain in the ass, especially if you've got a lot of data to transfer or you want to step away from the computer for a while.
It turns out samba can act a little weirdly with the higher read/write speeds of RAIDz arrays and transfers from Windows, and will intermittently crash and restart itself if this group option isn’t changed. Inputting the above command will prevent you from ever seeing that window.
The last thing we're going to do before switching over to our Windows PC is grab the IP address of our Linux machine. Enter the following command:
hostname -I
This will spit out this computer's IP address on the local network (it will look something like 192.168.0.x); write it down. Once you're done here, it might be a good idea to go into your router settings and reserve that IP for your Linux system in the DHCP settings. Check the manual for your specific router on how to access its settings; typically they can be reached by opening a browser and typing http://192.168.0.1 in the address bar, but your router may be different.
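Note that hostname -I prints every address the machine holds (a VPN or a virtual network bridge will add extras); if you only want the first one, a little awk trims the list down:

```shell
# Print only the first local IP address from the space-separated list.
hostname -I | awk '{print $1}'
```

On a machine with a single network interface this prints the same thing as the bare command, just without the trailing entries.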
Okay we’re done with our Linux computer for now. Get on over to your Windows PC, open File Explorer, right click on Network and click "Map network drive". Select Z: as the drive letter (you don't want to map the network drive to a letter you could conceivably be using for other purposes) and enter the IP of your Linux machine and location of the share like so: \\"LINUXCOMPUTERLOCALIPADDRESSGOESHERE"\"zpoolnamegoeshere"\. Windows will then ask you for your username and password, enter the ones you set earlier in Samba and you're good. If you've done everything right it should look something like this:
Tumblr media
You can now start moving media over from Windows to the share folder. It's a good idea to have a hard line running to all machines; moving files over Wi-Fi is going to be torturously slow. The only thing that will make the transfer time tolerable (hours instead of days) is a solid wired connection between both machines and your router.
Step Six: Setting Up Remote Desktop Access to Your Server
After the server is up and going, you’ll want to be able to access it remotely from Windows. Barring serious maintenance/updates, this is how you'll access it most of the time. On your Linux system open the terminal and enter:
sudo apt install xrdp
Then:
sudo systemctl enable xrdp
Once it's finished installing, open “Settings” on the sidebar and turn off "automatic login" in the User category. Then log out of your account. Attempting to remotely connect to your Linux computer while you’re logged in will result in a black screen!
Now get back on your Windows PC, open search and look for "RDP". A program called "Remote Desktop Connection" should pop up, open this program as an administrator by right-clicking and selecting “run as an administrator”. You’ll be greeted with a window. In the field marked “Computer” type in the IP address of your Linux computer. Press connect and you'll be greeted with a new window and prompt asking for your username and password. Enter your Ubuntu username and password here.
Tumblr media
If everything went right, you’ll be logged into your Linux computer. If the performance is sluggish, adjust the display options; lowering the resolution and colour depth does a lot to make the interface feel snappier.
Tumblr media
Remote access is how we're going to be using our Linux system from now on, barring edge cases like needing to get into the BIOS or upgrading to a new version of Ubuntu. Everything else, from performing maintenance like a monthly zpool scrub (this is important!!!) to checking zpool status and updating software, can all be done remotely.
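A scrub walks every block in the pool, verifies it against its checksum, and repairs anything damaged from parity. If you'd rather not have to remember it yourself, a root crontab entry can automate it. This is a sketch of a crontab line, not a shell command; it assumes the example pool name "mypool", and you can confirm the binary's path on your system with `which zpool`:

```
# Add via `sudo crontab -e`: run a scrub at 3:00 AM on the 1st of each month.
0 3 1 * * /usr/sbin/zpool scrub mypool
```

You can then check on the scrub's progress at any time with `zpool status`.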
Tumblr media
This is how my server lives its life now, happily humming and chirping away on the floor next to the couch in a corner of the living room.
Step Seven: Plex Media Server/Jellyfin
Okay, we’ve got all the groundwork finished and our server is almost up and running. We’ve got Ubuntu installed, our storage array is primed, we’ve set up remote connections and sharing, and maybe we’ve even moved over some of our favourite movies and TV shows.
Now we need to decide on the media server software to use which will stream our media to us and organize our library. For most people I’d recommend Plex. It just works 99% of the time. That said, Jellyfin has a lot to recommend it by too, even if it is rougher around the edges. Some people run both simultaneously, it’s not that big of an extra strain. I do recommend doing a little bit of your own research into the features each platform offers, but as a quick run down, consider some of the following points:
Plex is closed source and is funded through PlexPass purchases, while Jellyfin is open source and entirely user driven. This means a number of things: for one, Plex requires you to purchase a “PlexPass” (a one-time lifetime fee of $159.99 CDN/$120 USD, or a monthly or yearly subscription) in order to access certain features, like hardware transcoding (and we want hardware transcoding) or automated intro/credits detection and skipping; Jellyfin offers these features for free. On the other hand, Plex supports a lot more devices than Jellyfin and updates more frequently. That said, Jellyfin's Android/iOS apps are completely free, while the Plex Android/iOS apps must be activated for a one-time cost of $6 CDN/$5 USD. But that $6 fee gets you a mobile app that is much more functional and features a unified UI across Android and iOS platforms; the Plex mobile apps are simply a more polished experience. The Jellyfin apps are a bit of a mess, and the iOS and Android versions are very different from each other.
Jellyfin’s actual media player itself is more fully featured than Plex's, but on the other hand Jellyfin's UI, library customization and automatic media tagging really pale in comparison to Plex. Streaming your music library is free through both Jellyfin and Plex, but Plex offers the PlexAmp app for dedicated music streaming which boasts a number of fantastic features, unfortunately some of those fantastic features require a PlexPass. If your internet is down, Jellyfin can still do local streaming, while Plex can fail to play files. Jellyfin has a slew of neat niche features like support for Comic Book libraries with the .cbz/.cbt file types, but then Plex offers some free ad-supported TV and films, they even have a free channel that plays nothing but Classic Doctor Who.
Ultimately it's up to you. I settled on Plex because although some features are paywalled, it just works: it's more reliable and easier to use, and a one-time fee is much easier to swallow than a subscription. I do also need to mention that Jellyfin takes a little extra tinkering to get going in Ubuntu (you’ll have to set up process permissions), so if you're more tolerant of tinkering, Jellyfin might be up your alley, and I’ll trust that you can follow their installation and configuration guide. For everyone else, I recommend Plex.
So pick your poison: Plex or Jellyfin.
Note: The easiest way to download and install either of these packages in Ubuntu is through the Snap Store.
After you've installed one (or both), opening either app will launch a browser window into the browser version of the app allowing you to set all the options server side.
The process of creating media libraries is essentially the same in both Plex and Jellyfin. You create separate libraries for Television, Movies, and Music, and add the folders which contain the respective types of media to their respective libraries. The only difficult or time-consuming aspect is ensuring that your files and folders follow the appropriate naming conventions:
Plex naming guide for Movies
Plex naming guide for Television
Jellyfin follows the same naming rules, but I find its media scanner to be a lot less accurate and forgiving than Plex's. Once you've selected the folders to be scanned, the service will scan your files, tagging everything and adding metadata. Although I do find Plex more accurate, it can still erroneously tag some things, and you might have to manually clean up some tags in a large library. (When I initially created my library it tagged the 1963-1989 Doctor Who as some Korean soap opera, and I needed to manually select the correct match, after which everything was tagged normally.) It can also be a bit testy with anime (especially OVAs), so be sure to check TVDB to ensure that you have your files and folders structured and named correctly. If something is not showing up at all, double check the name.
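To make those conventions concrete, here's a sketch of a layout both servers will scan cleanly, built in a temp directory so you can see it without touching your real library (the titles are made up):

```shell
# Each movie gets its own folder; each show gets "Season NN" subfolders,
# with episodes named "Show (Year) - sNNeNN - Episode Title".
base=$(mktemp -d)
mkdir -p "$base/Movies/Example Movie (2010)"
touch "$base/Movies/Example Movie (2010)/Example Movie (2010).mkv"
mkdir -p "$base/TV Shows/Example Show (2015)/Season 01"
touch "$base/TV Shows/Example Show (2015)/Season 01/Example Show (2015) - s01e01 - Pilot.mkv"
(cd "$base" && find . -type f | sort)
# → ./Movies/Example Movie (2010)/Example Movie (2010).mkv
# → ./TV Shows/Example Show (2015)/Season 01/Example Show (2015) - s01e01 - Pilot.mkv
```

Mirror this structure inside your share folder and the scanners will have very little to guess about.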
Once that's done, organizing and customizing your library is easy. You can set up collections, grouping items together to fit a theme or collect together all the entries in a franchise. You can make playlists, and add custom artwork to entries. It's fun setting up collections with posters to match, there are even several websites dedicated to help you do this like PosterDB. As an example, below are two collections in my library, one collecting all the entries in a franchise, the other follows a theme.
Tumblr media
My Star Trek collection, featuring all eleven television series, and thirteen films.
Tumblr media
My Best of the Worst collection, featuring sixty-nine films previously showcased on RedLetterMedia’s Best of the Worst. They’re all absolutely terrible and I love them.
As for settings, ensure you've got Remote Access going, it should work automatically and be sure to set your upload speed after running a speed test. In the library settings set the database cache to 2000MB to ensure a snappier and more responsive browsing experience, and then check that playback quality is set to original/maximum. If you’re severely bandwidth limited on your upload and have remote users, you might want to limit the remote stream bitrate to something more reasonable, just as a note of comparison Netflix’s 1080p bitrate is approximately 5Mbps, although almost anyone watching through a Chromium-based browser is streaming at 720p and 3Mbps. Other than that you should be good to go. For actually playing your files, there's a Plex app for just about every platform imaginable. I mostly watch television and films on my laptop using the Windows Plex app, but I also use the Android app which can broadcast to the chromecast connected to the TV. Both are fully functional and easy to navigate, and I can also attest to the OS X version being equally functional.
Part Eight: Finding Media
Now, this is not really a piracy tutorial, there are plenty of those out there. But if you’re unaware, BitTorrent is free and pretty easy to use, just pick a client (qBittorrent is the best) and go find some public trackers to peruse. Just know now that all the best trackers are private and invite only, and that they can be exceptionally difficult to get into. I’m already on a few, and even then, some of the best ones are wholly out of my reach.
If you decide to take the left hand path and turn to Usenet you’ll have to pay. First you’ll need to sign up with a provider like Newshosting or EasyNews for access to Usenet itself, and then to actually find anything you’re going to need to sign up with an indexer like NZBGeek or NZBFinder. There are dozens of indexers, and many people cross post between them, but for more obscure media it’s worth checking multiple. You’ll also need a binary downloader like SABnzbd. That caveat aside, Usenet is faster, bigger, older, less traceable than BitTorrent, and altogether slicker. I honestly prefer it, and I'm kicking myself for taking this long to start using it because I was scared off by the price. I’ve found so many things on Usenet that I had sought in vain elsewhere for years, like a 2010 Italian film about a massacre perpetrated by the SS that played the festival circuit but never received a home media release; some absolute hero uploaded a rip of a festival screener DVD to Usenet, that sort of thing. Anyway, figure out the rest of this shit on your own and remember to use protection, get yourself behind a VPN, use a SOCKS5 proxy with your BitTorrent client, etc.
On the legal side of things, if you’re around my age, you (or your family) probably have a big pile of DVDs and Blu-Rays sitting around unwatched and half forgotten. Why not do a bit of amateur media preservation, rip them and upload them to your server for easier access? (Your tools for this are going to be Handbrake to do the ripping and AnyDVD to break any encryption.) I went to the trouble of ripping all my SCTV DVDs (five box sets worth) because none of it is on streaming nor could it be found on any pirate source I tried. I’m glad I did, forty years on it’s still one of the funniest shows to ever be on TV.
Part Nine/Epilogue: Sonarr/Radarr/Lidarr and Overseerr
There are a lot of ways to automate your server for better functionality or to add features you and other users might find useful. Sonarr, Radarr, and Lidarr are a part of a suite of “Servarr” services (there’s also Readarr for books and Whisparr for adult content) that allow you to automate the collection of new episodes of TV shows (Sonarr), new movie releases (Radarr) and music releases (Lidarr). They hook in to your BitTorrent client or Usenet binary newsgroup downloader and crawl your preferred Torrent trackers and Usenet indexers, alerting you to new releases and automatically grabbing them. You can also use these services to manually search for new media, and even replace/upgrade your existing media with better quality uploads. They’re really a little tricky to set up on a bare metal Ubuntu install (ideally you should be running them in Docker Containers), and I won’t be providing a step by step on installing and running them, I’m simply making you aware of their existence.
The other bit of kit I want to make you aware of is Overseerr which is a program that scans your Plex media library and will serve recommendations based on what you like. It also allows you and your users to request specific media. It can even be integrated with Sonarr/Radarr/Lidarr so that fulfilling those requests is fully automated.
And you're done. It really wasn't all that hard. Enjoy your media. Enjoy the control you have over that media. And be safe in the knowledge that no hedgefund CEO motherfucker who hates the movies but who is somehow in control of a major studio will be able to disappear anything in your library as a tax write-off.
Utah’s getting some of America’s best broadband
Tumblr media
TOMORROW (May 17), I'm at the INTERNET ARCHIVE in SAN FRANCISCO to keynote the 10th anniversary of the AUTHORS ALLIANCE.
Tumblr media
Residents of 21 cities in Utah have access to some of the fastest, most competitively priced broadband in the country, at speeds up to 10gb/s and prices as low as $75/month. It's uncapped, and the connections are symmetrical: perfect for uploading and downloading. And it's all thanks to the government.
This broadband service is, of course, delivered via fiber optic cable. Of course it is. Fiber is vastly superior to all other forms of broadband delivery, including satellites, but also cable and DSL. Fiber caps out at 100tb/s, while cable caps out at 50gb/s – that is, fiber is 2,000 times faster:
https://www.eff.org/deeplinks/2019/10/why-fiber-vastly-superior-cable-and-5g
Despite the obvious superiority of fiber, America has been very slow to adopt it. Our monopolistic carriers act as though pulling fiber to our homes is an impossible challenge. All those wires that currently go to your house, from power-lines to copper phone-lines, are relics of a mysterious, fallen civilization and its long-lost arts. Apparently we could no more get a new wire to your house than we could build the pyramids using only hand-tools.
In a sense, the people who say we can't pull wires anymore are right: these are relics of a lost civilization. Specifically, electrification and later, universal telephone service was accomplished through massive federal grants under the New Deal – grants that were typically made to either local governments or non-profit co-operatives who got everyone in town connected to these essential modern utilities.
Today – thanks to decades of neoliberalism and its dogmatic insistence that governments can't do anything and shouldn't try, lest they break the fragile equilibrium of the market – we have lost much of the public capacity that our grandparents took for granted. But in the isolated pockets where this capacity lives on, amazing things happen.
Since 2015, residents of Jackson County, KY – one of the poorest counties in America – have enjoyed some of the country's fastest, cheapest, most reliable broadband. The desperately poor Appalachian county is home to a rural telephone co-op, which grew out of its rural electrification co-op, and it used a combination of federal grants and local capacity to bring fiber to every home in the county, traversing dangerous mountain passes with a mule named "Ole Bub" to reach the most remote homes. The result was an immediately economic uplift for the community, and in the longer term, the county had reliable and effective broadband during the covid lockdowns:
https://www.newyorker.com/tech/annals-of-technology/the-one-traffic-light-town-with-some-of-the-fastest-internet-in-the-us
Contrast this with places where the private sector has the only say over who gets broadband, at what speed, and at what price. America is full of broadband deserts – deserts that strand our poorest people. Even in the hearts of our largest densest cities, whole neighborhoods can't get any broadband. You won't be surprised to learn that these are the neighborhoods that were historically redlined, and that the people who live in them are Black and brown, and also live with some of the highest levels of pollution and its attendant sicknesses:
https://pluralistic.net/2021/06/10/flicc/#digital-divide
These places are not set up for success under the best of circumstances, and during the lockdowns, they suffered terribly. You think your kid found it hard to go to Zoom school? Imagine what life was like for kids who attended remote learning while sitting on the baking tarmac in a Taco Bell parking lot, using its free wifi:
https://www.wsws.org/en/articles/2020/09/02/elem-s02.html
ISPs loathe competition. They divide up the country into exclusive territories like the Pope dividing up the "new world" and do not trouble one another by trying to sell to customers outside of "their" turf. When Frontier – one of the worst of America's terrible ISPs – went bankrupt, we got to see their books, and we learned two important facts:
The company booked one million customers who had no alternative as an asset, because they would pay more for slower broadband, and Frontier could save a fortune by skipping maintenance, and charging these customers for broadband even through multi-day outages; and
Frontier knew that it could make a billion dollars in profit over a decade by investing in fiber build-out, but it chose not to, because stock analysts will downrank any carrier that made capital investments that took more than five years to mature. Because Frontier's execs were paid primarily in stock, they chose to strand their customers with aging copper connections and to leave a billion dollars sitting on the table, so that their personal net worth didn't suffer a temporary downturn:
https://www.eff.org/deeplinks/2020/04/frontiers-bankruptcy-reveals-cynical-choice-deny-profitable-fiber-millions
ISPs maintain the weirdest position: that a) only the private sector can deliver broadband effectively, but b) to do so, they'll need massive, unsupervised, no-strings-attached government handouts. For years, America went along with this improbable scheme, which is why Trump's FCC chairman Ajit Pai gave the carriers $45 billion in public funds to string slow, 19th-century-style copper lines across rural America:
https://pluralistic.net/2022/02/27/all-broadband-politics-are-local/
Now, this is obviously untrue, and people keep figuring out that publicly provisioned broadband is the only way for America to get the same standard of broadband connectivity that our cousins in other high-income nations enjoy. In order to thwart the public's will, the cable and telco lobbyists joined ALEC, the far-right, corporatist lobbying shop, and drafted "model legislation" banning cities and counties from providing broadband, even in places the carriers chose not to serve:
https://pluralistic.net/2023/03/19/culture-war-bullshit-stole-your-broadband/
Red states across America adopted these rules, and legislators sold this to their base by saying that this was just "keeping the government out of their internet" (even as every carrier relied on an exclusive, government-granted territorial charter, often with massive government subsidies).
ALEC didn't target red states exclusively because they had pliable, bribable conservative lawmakers. Red states trend rural, and rural places are the most likely sites for public fiber. Partly, that's because it's harder to make a business case for low-density areas, but also because these are the places that got electricity and telephone through New Deal co-ops, which are often still in place.
Just about the only places in America where people like their internet service are the 450+ small towns where the local government provides fiber. These places vote solidly Republican, and it was their beloved conservative lawmakers whom ALEC targeted to enact laws banning their equally beloved fiber – keep voting for Christmas, turkeys, and see where it gets you:
https://communitynets.org/content/community-network-map
But spare a little sympathy for the conservative movement here. The fact that reality has a pronounced leftist bias must be really frustrating for the ideological project of insisting that anything the market can't provide is literally impossible.
Which brings me back to Utah, a red state with a Republican governor and legislature, and a national leader in passing unconstitutional, unhinged, unworkable legislation as part of an elaborate culture war kabuki:
https://www.npr.org/2023/03/24/1165975112/utah-passes-an-age-verification-law-for-anyone-using-social-media
For more than two decades, a coalition of 21 cities in Utah has been building out municipal fiber. The consortium calls itself UTOPIA: "Utah Telecommunication Open Infrastructure Agency":
https://www.utopiafiber.com/faqs/
UTOPIA pursues a hybrid model: they run "open access" fiber and then let anyone offer service over it. This can deliver the best of both worlds: publicly provisioned, blazing-fast fiber to your home, but with service provided by your choice of competing carriers. That means that if Moms for Liberty captures your local government, you're not captive to their ideas about what sites your ISP should block.
As Karl Bode writes for Techdirt, Utahns in UTOPIA regions have their choice of 18 carriers, and competition has driven down prices and increased speeds. Want uncapped 1gb fiber? That's $75/month. Want 10gb fiber? That's $150:
https://www.techdirt.com/2024/05/15/utah-locals-are-getting-cheap-10-gbps-fiber-thanks-to-local-governments/
UTOPIA's path to glory wasn't an easy one. The dismal telco monopolists Qwest and Lumen sued to put them out of business, delaying the rollout by years:
https://www.deseret.com/2005/7/22/19903471/utopia-responds-to-qwest-lawsuit/
UTOPIA has been profitable and self-sustaining for over 15 years and shows no sign of slowing. But 17 states still ban any attempt at this.
Keeping up such an obviously bad policy requires a steady stream of distractions and lies. The "government broadband doesn't work" lie has worn thin, so we've gotten a string of new lies about wireless service, insisting that fiber is obviated by point-to-point microwave relays, or 5g, or satellite service.
There's plenty of places where these services make sense. You're not going to be able to use fiber in a moving car, so yeah, you're going to want 5g (and those 5g towers are going to need to be connected to each other with fiber). Microwave relay service can fill the gap until fiber can be brought in, and it's great for temporary sites (especially in places where it doesn't rain, because rain, clouds, leaves and other obstructions are deadly for microwave relays). Satellite can make sense for an RV or a boat or remote scientific station.
But wireless services are orders of magnitude slower than fiber. With satellite service, you share your bandwidth with an entire region or even a state. If there's only a couple of users in your satellite's footprint, you might get great service, but when your carrier adds a thousand more customers, your connection is sliced into a thousand pieces.
That's also true for everyone sharing your fiber trunk, but the difference is that your fiber trunk supports speeds that are tens of thousands of times faster than the maximum speeds we can put through freespace electromagnetic spectrum. If we need more fiber capacity, we can just fish a new strand of fiber through the conduit. And while you can increase the capacity of wireless by increasing your power and bandwidth, at a certain point you start pumping so much EM into the air that birds start falling out of the sky.
Every wireless device in a region shares the same electromagnetic spectrum, and we are only issued one such spectrum per universe. Each strand of fiber, by contrast, has its own little pocket universe, containing a subset of that spectrum.
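The contention arithmetic above is simple enough to sketch. Here's a back-of-the-envelope illustration of why shared-spectrum services degrade as subscribers are added, while fiber scales by pulling more strands through the same conduit. All the capacity figures are hypothetical round numbers for illustration, not published specs for any carrier or satellite constellation:

```python
# Illustrative only: how per-subscriber throughput degrades on a shared
# medium, vs. how fiber capacity grows by adding strands. The capacity
# numbers below are made-up round figures, not real-world specs.

def per_user_mbps(shared_capacity_mbps: float, subscribers: int) -> float:
    """Everyone on a shared medium splits its total capacity."""
    return shared_capacity_mbps / subscribers

# One satellite beam serving a whole region: a single shared pool.
beam_capacity = 20_000  # hypothetical: 20 Gbps for the entire footprint
print(per_user_mbps(beam_capacity, 10))      # 2000.0 Mbps with 10 users
print(per_user_mbps(beam_capacity, 10_000))  # 2.0 Mbps once 10,000 sign up

# A fiber trunk is also shared -- but you can add strands to the same
# conduit, each carrying its own "pocket universe" of spectrum.
STRAND_CAPACITY_MBPS = 100_000  # hypothetical: 100 Gbps per strand

def fiber_per_user_mbps(strands: int, subscribers: int) -> float:
    """Total trunk capacity scales linearly with the number of strands."""
    return strands * STRAND_CAPACITY_MBPS / subscribers

print(fiber_per_user_mbps(1, 10_000))    # 10.0 Mbps on one strand
print(fiber_per_user_mbps(100, 10_000))  # 1000.0 Mbps: add strands, not spectrum
```

The point of the sketch: the satellite operator's only lever is the one shared spectrum pool, so every new customer dilutes everyone else, while the fiber operator can keep multiplying capacity inside the same conduit.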
Despite all its disadvantages, satellite broadband has one distinct advantage, at least from an investor's perspective: it can be monopolized. Just as we only have one electromagnetic spectrum, we also only have one sky, and the satellite density needed to sustain a colorably fast broadband speed pushes the limit of that shared sky:
https://spacenews.com/starlink-vs-the-astronomers/
Private investors love monopoly telecoms providers, because, like pre-bankruptcy Frontier, they are too big to care. Back in 2021, Altice – the fourth-largest cable operator in America – announced that it was slashing its broadband speeds, to be "in line with other ISPs":
https://pluralistic.net/2021/06/27/immortan-altice/#broadband-is-a-human-right
In other words: "We've figured out that our competitors are so much worse than we are that we are deliberately degrading our service because we know you will still pay us the same for less."
This is why corporate shills and pro-monopolists prefer satellite to municipal fiber. Sure, it's orders of magnitude slower than fiber. Sure, it costs subscribers far more. Sure, it's less reliable. But boy oh boy is it profitable.
The thing is, reality has a pronounced leftist bias. No amount of market magic will conjure up new electromagnetic spectra that will allow satellite to attain parity with fiber. Physics hates Starlink.
Yeah, I'm talking about Starlink. Of course I am. Elon Musk basically claims that his business genius can triumph over physics itself.
That's not the only vast, impersonal, implacable force that Musk claims he can best with his incredible reality-distortion field. Musk also claims that he can somehow add so many cars to the road that he will end traffic – in other words, he will best geometry too:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Geometry hates Tesla, and physics hates Starlink. Reality has a leftist bias. The future is fiber, and public transit. These are both vastly preferable, more efficient, safer, more reliable and more plausible than satellite and private vehicles. Their only disadvantage is that they fail to give an easily gulled, thin-skinned compulsive liar more power over billions of people. That's a disadvantage I can live with.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/05/16/symmetrical-10gb-for-119/#utopia
Image: 4028mdk09 (modified) https://commons.wikimedia.org/wiki/File:Rote_LED_Fiberglasleuchte.JPG
CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en