#Remote Server Operating System
Unleash Your Windows Applications Anywhere: RHosting's HTML Version
In the ever-evolving landscape of remote server access, RHosting stands out as a beacon of versatility and innovation. Our HTML version platform offers a unique experience, allowing users to access Windows applications seamlessly on any device with any operating system. Whether you're using a MacBook, iPad, Android tablet, Linux machine, or any desktop or laptop with a respective OS, RHosting's HTML version ensures consistent, reliable access to your Windows applications, breaking free from the constraints of specific devices.
Embracing Versatility with RHosting's HTML Version
At the heart of RHosting's HTML version lies its extraordinary versatility. Unlike traditional remote access solutions that are limited to specific operating systems, our platform empowers users to harness the full potential of their Windows applications across a diverse array of devices. Whether you're on the go or working from the comfort of your home, RHosting's HTML version ensures that your applications are always within reach.
Seamless Data Transfer Capabilities
Delving deeper into the features of RHosting's HTML version, users will discover a standout element: the ability to upload and download data between their server and local machine. This unique functionality brings unprecedented convenience and flexibility to your fingertips, enabling effortless file transfer between devices. Whether you need to upload critical documents, download essential reports, or exchange vital files, RHosting's HTML version streamlines your data management process, enhancing your productivity and efficiency.
Experience Freedom and Flexibility
With RHosting's HTML version, we invite you to experience a new level of freedom and flexibility in accessing Windows applications. Our platform is designed with a steadfast commitment to cross-platform compatibility, ensuring that you can enjoy unhindered access to your applications, regardless of your device's operating system. Say goodbye to device restrictions and embrace the freedom to choose your preferred mode of operation with RHosting.
Unlock New Possibilities with RHosting
In conclusion, RHosting's HTML version redefines remote server access, offering unparalleled versatility and convenience. Whether you're a business professional, a student, or a creative enthusiast, our platform empowers you to unleash the full potential of your Windows applications on any device. Dive into the diverse, versatile world of RHosting, and unlock new possibilities in remote server access. Experience the power of RHosting's HTML version today.
Contact RHosting to learn more about our HTML version platform and start your journey towards seamless remote server access. Visit our website at www.rhosting.com. Join us in embracing the future of remote server access with RHosting.
Home Server: Why you shouldn't build one
You can do so much running your own home server. It is a great tool for learning and actually storing your data. Many prefer to control their own file sharing, media streaming services, and also run their own web server hosting self-hosted services. However, let’s look at a home server from a slightly different angle – why you shouldn’t build your own home server. Table of contents: What is a…
“The Fagin figure leading Elon Musk’s merry band of pubescent sovereignty pickpockets”

This week only, Barnes and Noble is offering 25% off pre-orders of my forthcoming novel Picks and Shovels. ENDS TODAY!
While we truly live in an age of ascendant monsters who have hijacked our country, our economy, and our imaginations, there is one consolation: the small cohort of brilliant, driven writers who have these monsters' number, and will share it with us. Writers like Maureen Tkacik:
https://prospect.org/topics/maureen-tkacik/
Journalists like Wired's Vittoria Elliott, Leah Feiger, and Tim Marchman are absolutely crushing it when it comes to Musk's DOGE coup:
https://www.wired.com/author/vittoria-elliott/
And Nathan Tankus is doing incredible work all on his own, just blasting out scoop after scoop:
https://www.crisesnotes.com/
But for me, it was Tkacik – as usual – in the pages of The American Prospect who pulled it all together in a way that finally made it make sense, transforming the blitzkrieg Muskian chaos into a recognizable playbook. While most of the coverage of Musk's wrecking crew has focused on the broccoli-haired Gen Z brownshirts who are wilding through the server rooms at giant, critical government agencies, Tkacik homes in on their boss, Tom Krause, whom she memorably dubs "the Fagin figure leading Elon Musk’s merry band of pubescent sovereignty pickpockets" (I told you she was a great writer!):
https://prospect.org/power/2025-02-06-private-equity-hatchet-man-leading-lost-boys-of-doge/
Krause is a private equity looter. He's the guy who basically invented the playbook for PE takeovers of large tech companies, from Broadcom to Citrix to VMWare, converting their businesses from selling things to renting them out, loading them up with junk fees, slashing quality, jacking up prices over and over, and firing everyone who was good at their jobs. He is a master enshittifier, an enshittification ninja.
Krause has an unerring instinct for making people miserable while making money. He oversaw the merger of Citrix and VMWare, creating a ghastly company called The Cloud Software Group, which sold remote working tools. Despite this, one of his first official acts was to order all of his employees to stop working remotely. But then, after forcing his workers to drag their butts into work, move back across the country, etc., he reversed himself because he figured out he could sell off all of the company's office space for a tidy profit.
Krause canceled employee benefits, like thank you days for managers who pulled a lot of unpaid overtime, or bonuses for workers who upgraded their credentials. He also ended the company's practice of handing out swag as small gifts to workers, and then stiffed the company that made the swag, refusing to pay a $437,574.97 invoice for all the tchotchkes the company had ordered. That's not the only supplier Krause stiffed: FinLync, a fintech company with a three-year contract with Krause's company, also had to sue to get paid.
Krause isn't a canny operator who roots out waste: he's a guy who tears out all the wiring and then grudgingly restores the minimum needed to keep the machine running (no wonder Musk loves him, this is the Twitter playbook). As Tkacik reports, Krause fucked up the customer service and reliability systems that served Citrix's extremely large, corporate customers – the giant businesses that cut huge monthly checks to Citrix, whose CIOs received daily sales calls from his competitors.
Workers who serviced these customers, like disabled Air Force veteran David Morgan, who worked with big public agencies, were fired on one hour's notice, just before their stock options vested. The giant public agency customers he'd serviced later called him to complain that the only people they could get on the phone were subcontractors in Indian call centers who lacked the knowledge and authority to resolve their problems.
Last month, Citrix fired all of its customer support engineers. Citrix's military customers are being illegally routed to offshore customer support teams who are prohibited from working with the US military.
Citrix/VMWare isn't an exception. The carnage at these companies is indistinguishable from the wreck Krause made of Broadcom. In all these cases, Krause was parachuted in by private equity bosses, and he destroyed something useful to extract a giant, one-time profit, leaving behind a husk that no longer provides value to its customers or its employees.
This is the DOGE playbook. It's all about plunder: take something that was patiently, carefully built up over generations and burn it to the ground, warming yourself in the pyre, leaving nothing behind but ash. This is what private equity plunderers have been doing to the world's "advanced" economies since the Reagan years. They did it to airlines, family restaurants, funeral homes, dog groomers, toy stores, pharma, palliative care, dialysis, hospital beds, groceries, cars, and the internet.
Trump's a plunderer. He was elected by the plunderer class – like the crypto bros who want to run wild, transforming workers' carefully shepherded retirement savings into useless shitcoins, while the crypto bros run off with their perfectly cromulent "fiat" money. Musk is the apotheosis of this mindset, a guy who claims credit for other peoples' productive and useful businesses, replacing real engineering with financial engineering. Musk and Krause, they're like two peas in a pod.
That's why – according to anonymous DOGE employees cited by Tkacik – DOGE managers are hired for their capacity for cruelty: "The criteria for DOGE is how many you have fired, how much you enjoy firing people, and how little you care about the impact on peoples well being… No wonder Tom Krause was tapped for this. He’s their dream employee!"
The fact that Krause isn't well known outside of plunderer circles is absolutely a feature for him, not a bug. Scammers like Krause want to be admitted to polite society. This is why the Sacklers – the opioid crime family that kicked off the Oxy pandemic that's murdered more than 800,000 Americans so far – were so aggressive about keeping their association with their family business, Purdue Pharma, a secret. The Sacklers only wanted to be associated with the art galleries and museums they put their names over, and their lawyers threatened journalists for writing about their lives as billionaire drug pushers (I got one of those threats).
There's plenty of good reasons to be anonymous – if you're a whistleblower, say. But if you ever encounter a corporate executive who insists on anonymity, that's a wild danger sign. Take Pixsy, the scam "copyleft trolls" whose business depends on baiting people into making small errors when using images licensed under very early versions of the Creative Common licenses, and then threatening to sue them unless they pay hundreds or thousands of dollars:
https://pluralistic.net/2022/01/24/a-bug-in-early-creative-commons-licenses-has-enabled-a-new-breed-of-superpredator/
Kain Jones, the CEO of Pixsy, tried to threaten me under the EU's GDPR for revealing the names of the scammer on his payroll who sent me a legal threat, and the executive who ran the scam for his business (I say he tried to threaten me because I helped lobby for the GDPR and I know for a fact that this isn't a GDPR violation):
https://pluralistic.net/2022/02/13/an-open-letter-to-pixsy-ceo-kain-jones-who-keeps-sending-me-legal-threats/
These people understand that they are in the business of ripping people off, causing them grave and wholly unjust financial injury. They value their secrecy because they are in the business of making strangers righteously furious, and they understand that one of these strangers might just show up in their lives someday to confront them about their transgressions.
This is why UnitedHealthcare freaked out so hard about Luigi Mangione's assassination of CEO Brian Thompson – that's not how the game is supposed to be played. The people who sit on executive row, destroying your lives, are supposed to be wholly insulated from the consequences of their actions. You're not supposed to know who they are, and of course, you're not supposed to be able to find them.
But even more importantly, you're not supposed to be angry at them. They pose as mere software agents in an immortal colony organism called a Limited Liability Corporation, bound by the iron law of shareholder supremacy to destroy your life while getting very, very rich. It's not supposed to be personal. That's why UnitedHealthcare is threatening to sue a doctor who was yanked out of surgery on a cancer patient to be berated by a UHC rep for ordering a hospital stay for her patient:
https://gizmodo.com/unitedhealthcare-is-mad-about-in-luigi-we-trust-comments-under-a-doctors-viral-post-2000560543
UHC is angry that this surgeon, Austin's Dr Elisabeth Potter, went TikTok-viral with her true story of how chaotic and depraved and uncaring UHC is. UHC execs fear that Mangione made it personal, that he obliterated the accountability sink of the corporation and put the blame squarely where it belongs – on the (mostly) men at the top who make this call.
This is a point Adam Conover made in his latest Factually podcast, where he interviewed Propublica's T Christian Miller and Patrick Rucker:
https://www.youtube.com/watch?v=Y_5tDXRw8kg
Miller and Rucker published a blockbuster investigative report into Cigna's EviCore, a secret company that offers claims-denials as a service to America's biggest health insurers:
https://www.propublica.org/article/evicore-health-insurance-denials-cigna-unitedhealthcare-aetna-prior-authorizations
If you're the CEO of a health insurance company and you don't like how much you're paying out for MRIs or cancer treatment, you tell EviCore (which processes all your claim authorizations) and they turn a virtual dial that starts to reduce the number of MRIs your customers are allowed to have. This dial increases the likelihood that a claim or pre-authorization will be denied, which, in turn, makes doctors less willing to order them (even if they're medically necessary) and makes patients more likely to pay for them out of pocket.
Towards the end of the conversation, Miller and Rucker talk about how the rank-and-file people at an insurer don't get involved with the industry to murder people in order to enrich their shareholders. They genuinely want to help people. But executive row is different: those very wealthy people do believe their job is to kill people to save money, and get richer. Those people are personally to blame for the systemic problem. They are the ones who design and operate the system.
That's why naming the people who are personally responsible for these immoral, vicious acts is so important. That's why it's important that Wired and Propublica are unmasking the "pubescent sovereignty pickpockets" who are raiding the federal government under Krause's leadership:
https://projects.propublica.org/elon-musk-doge-tracker/
These people are committing grave crimes against the nation and its people. They should be known for this. It should follow them for the rest of their lives. It should be the lead in their obituaries. People who are introduced to them at parties should have a flash of recognition, hastily end the handshake, then turn on their heels and race to the bathroom to scrub their hands. For the rest of their lives.
Naming these people isn't enough to stop the plunder, but it helps. Yesterday, Marko Elez, the 25 year old avowed "eugenicist" who wanted to "normalize Indian hate" and could not be "[paid] to marry outside of my ethnicity," was shown the door. He's off the job. For the rest of his life, he will be the broccoli-haired brownshirt who got fired for his asinine, racist shitposting:
https://www.npr.org/2025/02/06/nx-s1-5289337/elon-musk-doge-treasury
After Krause's identity as the chief wrecker at DOGE was revealed, the brilliant Anna Merlan (author of Republic of Lies, the best book on conspiratorialism), wrote that "Now the whole country gets the experience of what it’s like when private equity buys the place you work":
https://bsky.app/profile/annamerlan.bsky.social/post/3lhepjkudcs2t
That's exactly it. We are witnessing a private equity-style plunder of the entire US government – of the USA itself. No one is better poised to write about this than Tkacik, because no one has private equity's number like Tkacik does:
https://pluralistic.net/2023/06/02/plunderers/#farben
Ironically, all this came down just as Trump announced that he was going to finally get rid of private equity's scammiest trick, the "carried interest" loophole that lets PE bosses (and, to a lesser extent, hedge fund managers) avoid billions in personal taxes:
https://archive.is/yKhvD
"Carried interest" has nothing to do with the interest rate – it's a law that was designed for 16th century sea captains who had an "interest" in the cargo they "carried":
https://pluralistic.net/2021/04/29/writers-must-be-paid/#carried-interest
Trump campaigned on killing this loophole in 2017, but Congress stopped him, after a lobbying blitz by the looter industry. It's possible that he genuinely wants to get rid of the carried interest loophole – he's nothing if not idiosyncratic, as the residents of Greenland can attest:
https://prospect.org/world/2025-02-07-letter-between-friendly-nations/
Even if he succeeds, looters and the "investor class" will get a huge giveaway under Trump, in the form of more tax giveaways and the dismantling of labor and environmental regulation. But it's far more likely that he won't succeed. Rather – as Yves Smith writes for Naked Capitalism – he'll do what he did with the Canada and Mexico tariffs: make a tiny, unimportant change and then lie and say he had done something revolutionary:
https://www.nakedcapitalism.com/2025/02/is-trump-serious-about-trying-to-close-the-private-equity-carried-interest-loophole.html
This has been a shitty month, and it's not gonna get better for a while. On my dark days, I worry that it won't get better during my lifetime. But at least we have people like Tkacik to chronicle it, explain it, put it in context. She's amazing, a whirlwind. The same day that her report on Krause dropped, the Prospect published another must-read piece by her, digging deep into Alex Jones's convoluted bankruptcy gambit:
https://prospect.org/justice/2025-02-06-crisis-actors-alex-jones-bankruptcy/
It lays bare the wild world of elite bankruptcy court, another critical conduit for protecting the immoral rich from their victims. The fact that Tkacik can explain both Krause and the elite bankruptcy system on the same day is beyond impressive.
We've got a lot of work ahead of ourselves. The people in charge of this system – whose names you must learn and never forget – aren't going to go easily. But at least we know who they are. We know what they're doing. We know how the scam works. It's not a flurry of incomprehensible actions – it's a playbook that killed Red Lobster, Toys R Us, and Sears. We don't have to follow that playbook.
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2025/02/07/broccoli-hair-brownshirts/#shameless
How I ditched streaming services and learned to love Linux: A step-by-step guide to building your very own personal media streaming server (V2.0: REVISED AND EXPANDED EDITION)
This is a revised, corrected and expanded version of my tutorial on setting up a personal media server that previously appeared on my old blog (donjuan-auxenfers). I expect that that post is still making the rounds (hopefully with my addendum on modifying group share permissions in Ubuntu to circumvent 0x8007003B "Unexpected Network Error" messages in Windows 10/11 when transferring files) but I have no way of checking. Anyway this new revised version of the tutorial corrects one or two small errors I discovered when rereading what I wrote, adds links to all products mentioned and is just more polished generally. I also expanded it a bit, pointing more adventurous users toward programs such as Sonarr/Radarr/Lidarr and Overseerr which can be used for automating user requests and media collection.
So then, what is this tutorial? This is a tutorial on how to build and set up your own personal media server using Ubuntu as an operating system and Plex (or Jellyfin) to not only manage your media, but to also stream that media to your devices both at home and abroad anywhere in the world where you have an internet connection. Its intent is to show you how building a personal media server and stuffing it full of films, TV, and music that you acquired through ~~indiscriminate and voracious media piracy~~ various legal methods will free you to completely ditch paid streaming services. No more will you have to pay for Disney+, Netflix, HBOMAX, Hulu, Amazon Prime, Peacock, CBS All Access, Paramount+, Crave or any other streaming service that is not named Criterion Channel. Instead whenever you want to watch your favourite films and television shows, you’ll have your own personal service that only features things that you want to see, with files that you have control over. And for music fans out there, both Jellyfin and Plex support music streaming, meaning you can even ditch music streaming services. Goodbye Spotify, Youtube Music, Tidal and Apple Music, welcome back unreasonably large MP3 (or FLAC) collections.
On the hardware front, I’m going to offer a few options catered towards different budgets and media library sizes. Getting a media server up and running using this guide will cost you anywhere from $450 CAD/$325 USD at the low end to $1500 CAD/$1100 USD at the high end (it could go higher). My server was priced closer to the higher figure, but I went and got a lot more storage than most people need. If that seems like a bit much, consider for a moment: do you have a roommate, a close friend, or a family member who would be willing to chip in a few bucks towards your little project provided they get access? Well that's how I funded my server. It might also be worth thinking about the cost over time, i.e. how much you spend yearly on subscriptions vs. a one time cost of setting up a server. Additionally there's just the joy of being able to scream "fuck you" at all those show cancelling, library deleting, hedge fund vampire CEOs who run the studios through denying them your money. Drive a stake through David Zaslav's heart.
On the software side I will walk you step-by-step through installing Ubuntu as your server's operating system, configuring your storage as a RAIDz array with ZFS, sharing your zpool to Windows with Samba, running a remote connection between your server and your Windows PC, and then a little about getting started with Plex/Jellyfin. Every terminal command you will need to input will be provided, and I even share a custom bash script that will make used vs. available drive space on your server display correctly in Windows.
If you have a different preferred flavour of Linux (Arch, Manjaro, Redhat, Fedora, Mint, OpenSUSE, CentOS, Slackware, etc.) and are aching to tell me off for being basic and using Ubuntu, this tutorial is not for you. The sort of person with a preferred Linux distro is the sort of person who can do this sort of thing in their sleep. Also I don't care. This tutorial is intended for the average home computer user. This is also why we’re not using a more exotic home server solution like running everything through Docker Containers and managing it through a dashboard like Homarr or Heimdall. While such solutions are fantastic and can be very easy to maintain once you have it all set up, wrapping your brain around Docker is a whole thing in and of itself. If you do follow this tutorial and had fun putting everything together, then I would encourage you to return in a year’s time, do your research and set up everything with Docker Containers.
Lastly, this is a tutorial aimed at Windows users. Although I was a daily user of OS X for many years (roughly 2008-2023) and I've dabbled quite a bit with various Linux distributions (mostly Ubuntu and Manjaro), my primary OS these days is Windows 11. Many things in this tutorial will still be applicable to Mac users, but others (e.g. setting up shares) you will have to look up for yourself. I doubt it would be difficult to do so.
Nothing in this tutorial will require feats of computing expertise. All you will need is basic computer literacy (i.e. an understanding of what a filesystem and directory are, and a degree of comfort in the settings menu) and a willingness to learn a thing or two. While this guide may look overwhelming at first glance, it is only because I want to be as thorough as possible. I want you to understand exactly what it is you're doing; I don't want you to just blindly follow steps. If you halfway know what you’re doing, you will be much better prepared if you ever need to troubleshoot.
Honestly, once you have all the hardware ready it shouldn't take more than an afternoon or two to get everything up and running.
(This tutorial is just shy of seven thousand words long so the rest is under the cut.)
Step One: Choosing Your Hardware
Linux is a lightweight operating system; depending on the distribution there's close to no bloat. There are recent distributions available at this very moment that will run perfectly fine on a fourteen-year-old i3 with 4GB of RAM. Moreover, running Plex or Jellyfin isn’t resource intensive in 90% of use cases. All this is to say, we don’t require an expensive or powerful computer. This means that there are several options available: 1) use an old computer you already have sitting around but aren't using, 2) buy a used workstation from eBay, or, what I believe to be the best option, 3) order an N100 Mini-PC from AliExpress or Amazon.
Note: If you already have an old PC sitting around that you’ve decided to use, fantastic, move on to the next step.
When weighing your options, keep a few things in mind: the number of people you expect to be streaming simultaneously at any one time, the resolution and bitrate of your media library (4k video takes a lot more processing power than 1080p) and most importantly, how many of those clients are going to be transcoding at any one time. Transcoding is what happens when the playback device does not natively support direct playback of the source file. This can happen for a number of reasons, such as the playback device's native resolution being lower than the file's internal resolution, or because the source file was encoded in a video codec unsupported by the playback device.
Ideally we want any transcoding to be performed by hardware. This means we should be looking for a computer with an Intel processor with Quick Sync. Quick Sync is a dedicated core on the CPU die designed specifically for video encoding and decoding. This specialized hardware makes for highly efficient transcoding both in terms of processing overhead and power draw. Without these Quick Sync cores, transcoding must be brute forced through software. This takes up much more of a CPU’s processing power and requires much more energy. But not all Quick Sync cores are created equal and you need to keep this in mind if you've decided either to use an old computer or to shop for a used workstation on eBay.
Any Intel processor from second generation Core (Sandy Bridge circa 2011) onward has Quick Sync cores. It's not until 6th gen (Skylake), however, that the cores support the H.265 HEVC codec. Intel’s 10th gen (Comet Lake) processors introduce support for 10bit HEVC and HDR tone mapping. And the recent 12th gen (Alder Lake) processors brought with them hardware AV1 decoding. As an example, while an 8th gen (Coffee Lake) i5-8500 will be able to hardware transcode a H.265 encoded file, it will fall back to software transcoding if given a 10bit H.265 file. If you’ve decided to use that old PC or to look on eBay for an old Dell Optiplex keep this in mind.
Note 1: The price of old workstations varies wildly and fluctuates frequently. If you get lucky and go shopping shortly after a workplace has liquidated a large number of their workstations you can find deals for as low as $100 on a barebones system, but generally an i5-8500 workstation with 16gb RAM will cost you somewhere in the area of $260 CAD/$200 USD.
Note 2: The AMD equivalent to Quick Sync is called Video Core Next, and while it's fine, it's not as efficient and not as mature a technology. It was only introduced with the first generation Ryzen CPUs and it only got decent with their newest CPUs, and we want something cheap.
Alternatively you could forgo having to keep track of what generation of CPU is equipped with Quick Sync cores that feature support for which codecs, and just buy an N100 mini-PC. For around the same price or less of a used workstation you can pick up a mini-PC with an Intel N100 processor. The N100 is a four-core processor based on the 12th gen Alder Lake architecture and comes equipped with the latest revision of the Quick Sync cores. These little processors offer astounding hardware transcoding capabilities for their size and power draw. Otherwise they perform equivalent to an i5-6500, which isn't a terrible CPU. A friend of mine uses an N100 machine as a dedicated retro emulation gaming system and it does everything up to 6th generation consoles just fine. The N100 is also a remarkably efficient chip, it sips power. In fact, the difference between running one of these and an old workstation could work out to hundreds of dollars a year in energy bills depending on where you live.
You can find these Mini-PCs all over Amazon or for a little cheaper on AliExpress. They range in price from $170 CAD/$125 USD for a no name N100 with 8GB RAM to $280 CAD/$200 USD for a Beelink S12 Pro with 16GB RAM. The brand doesn't really matter, they're all coming from the same three factories in Shenzen, go for whichever one fits your budget or has features you want. 8GB RAM should be enough, Linux is lightweight and Plex only calls for 2GB RAM. 16GB RAM might result in a slightly snappier experience, especially with ZFS. A 256GB SSD is more than enough for what we need as a boot drive, though going for a bigger drive might allow you to get away with things like creating preview thumbnails for Plex; it’s up to you and your budget.
The Mini-PC I wound up buying was a Firebat AK2 Plus with 8GB RAM and a 256GB SSD. It looks like this:
Note: If you decide to order a Mini-PC from AliExpress, be forewarned and check the type of power adapter it ships with. The mini-PC I bought came with an EU power adapter and I had to supply my own North American power supply. Thankfully this is a minor issue as barrel plug 30W/12V/2.5A power adapters are easy to find and can be had for $10.
Step Two: Choosing Your Storage
Storage is the most important part of our build. It is also the most expensive. Thankfully it’s also the most easily upgrade-able down the line.
For people with a smaller media collection (4TB to 8TB), a more limited budget, or who will only ever have two simultaneous streams running, I would say that the most economical course of action would be to buy a USB 3.0 8TB external HDD. Something like this one from Western Digital or this one from Seagate. One of these external drives will cost you in the area of $200 CAD/$140 USD. Down the line you could add a second external drive or replace it with a multi-drive RAIDz set up such as detailed below.
If a single external drive is the path for you, move on to step three.
For people with larger media libraries (12TB+), who prefer media in 4k, or who care about data redundancy, the answer is a RAID array featuring multiple HDDs in an enclosure.
Note: If you are using an old PC or used workstation as your server and have the room for at least three 3.5" drives, and as many open SATA ports on your motherboard, you won't need an enclosure, just install the drives into the case. If your old computer is a laptop or doesn’t have room for more internal drives, then I would suggest an enclosure.
The minimum number of drives needed to run a RAIDz array is three, and seeing as RAIDz is what we will be using, you should be looking for an enclosure with three to five bays. I think that four disks makes for a good compromise for a home server. Regardless of whether you go for a three, four, or five bay enclosure, do be aware that in a RAIDz1 array the space equivalent of one of the drives will be dedicated to parity, leaving you with a usable fraction of 1 − 1/n of the raw capacity; i.e. in a four bay enclosure equipped with four 12TB drives, if we configured our drives in a RAIDz1 array we would be left with a total of 36TB of usable space (48TB raw size). The reason for why we might sacrifice storage space in such a manner will be explained in the next section.
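If you want to sanity check that math for whatever combination of drives you're considering, here's a quick back-of-the-envelope calculation you can paste into a terminal (a minimal sketch; the drive count, size, and parity level are just my example numbers, swap in your own, and note that real-world usable space will be a little less due to ZFS overhead):
DRIVES=4; SIZE_TB=12; PARITY=1
echo "raw capacity: $((DRIVES * SIZE_TB)) TB"
echo "usable (RAIDz$PARITY): $(((DRIVES - PARITY) * SIZE_TB)) TB"
With four 12TB drives and single parity that prints 48 TB raw and 36 TB usable, matching the example above.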
A four bay enclosure will cost somewhere in the area of $200 CDN/$140 USD. You don't need anything fancy, we don't need anything with hardware RAID controls (RAIDz is done entirely in software) or even USB-C. An enclosure with USB 3.0 will perform perfectly fine. Don’t worry too much about USB speed bottlenecks. A mechanical HDD will be limited by the speed of its mechanism long before it will be limited by the speed of a USB connection. I've seen decent looking enclosures from TerraMaster, Yottamaster, Mediasonic and Sabrent.
When it comes to selecting the drives, as of this writing, the best value (dollar per gigabyte) are those in the range of 12TB to 20TB. I settled on 12TB drives myself. If 12TB to 20TB drives are out of your budget, go with what you can afford, or look into refurbished drives. I'm not sold on the idea of refurbished drives but many people swear by them.
When shopping for hard drives, search for drives designed specifically for NAS use. Drives designed for NAS use typically have better vibration dampening and are designed to be active 24/7. They will also often make use of CMR (conventional magnetic recording) as opposed to SMR (shingled magnetic recording). This nets them a sizable read/write performance bump over typical desktop drives. Seagate Ironwolf and Toshiba NAS are both well regarded brands when it comes to NAS drives. I would avoid Western Digital Red drives at this time. WD Reds were a go-to recommendation up until earlier this year, when it was revealed that their firmware will quite often throw up false SMART warnings at the three year mark telling you to replace the drive when there is nothing at all wrong with it – a drive that will likely be good for another six, seven, or more years.
Step Three: Installing Linux
For this step you will need a USB thumbdrive of at least 6GB in capacity, an .ISO of Ubuntu, and a way to make that thumbdrive bootable media.
First download a copy of Ubuntu Desktop. (For best performance we could download the Server release, but for new Linux users I would recommend against it. The Server release is strictly command line interface only, and having a GUI is very helpful for most people. Not many people are wholly comfortable doing everything through the command line; I'm certainly not one of them, and I grew up with DOS 6.0.) 22.04.3 Jammy Jellyfish is the current Long Term Support release, and this is the one to get.
Download the .ISO and then download and install balenaEtcher on your Windows PC. BalenaEtcher is an easy to use program for creating bootable media, you simply insert your thumbdrive, select the .ISO you just downloaded, and it will create a bootable installation media for you.
Once you've made a bootable media and you've got your Mini-PC (or your old PC/used workstation) in front of you, hook it directly into your router with an ethernet cable, and then plug in the HDD enclosure, a monitor, a mouse and a keyboard. Now turn that sucker on and hit whatever key gets you into the BIOS (typically ESC, DEL or F2). If you’re using a Mini-PC check to make sure that the P1 and P2 power limits are set correctly, my N100's P1 limit was set at 10W, a full 20W under the chip's power limit. Also make sure that the RAM is running at the advertised speed. My Mini-PC’s RAM was set at 2333MHz out of the box when it should have been 3200MHz. Once you’ve done that, key over to the boot order and place the USB drive first in the boot order. Then save the BIOS settings and restart.
After you restart you’ll be greeted by Ubuntu's installation screen. Installing Ubuntu is really straightforward: select the "minimal" installation option, as we won't need anything on this computer except for a browser (Ubuntu comes preinstalled with Firefox) and Plex Media Server/Jellyfin Media Server. Also remember to delete and reformat that Windows partition! We don't need it.
Step Four: Installing ZFS and Setting Up the RAIDz Array
Note: If you opted for just a single external HDD skip this step and move onto setting up a Samba share.
Once Ubuntu is installed it's time to configure our storage by installing ZFS to build our RAIDz array. ZFS is a "next-gen" file system that is both massively flexible and massively complex. It's capable of snapshot backups and self healing error correction, and ZFS pools can be configured with drives operating in a supplemental manner alongside the storage vdev (e.g. fast cache, a dedicated separate intent log, hot swap spares etc.). It's also a file system very amenable to fine tuning. Block and sector size are adjustable to use case and you're afforded the option of different methods of inline compression. If you'd like a very detailed overview and explanation of its various features and tips on tuning a ZFS array check out these articles from Ars Technica. For now we're going to ignore all these features and keep it simple: we're going to pull our drives together into a single vdev running in RAIDz which will be the entirety of our zpool, no fancy cache drive or SLOG.
Open up the terminal and type the following commands:
sudo apt update
then
sudo apt install zfsutils-linux
This will install the ZFS utility. Verify that it's installed with the following command:
zfs --version
Now, it's time to check that the HDDs we have in the enclosure are healthy, running, and recognized. We also want to find out their device IDs and take note of them:
sudo fdisk -l
Note: You might be wondering why some of these commands require "sudo" in front of them while others don't. "Sudo" is short for "super user do”. When and where "sudo" is used has to do with the way permissions are set up in Linux. Only the "root" user has the access level to perform certain tasks in Linux. As a matter of security and safety regular user accounts are kept separate from the "root" user. It's not advised (or even possible) to boot into Linux as "root" with most modern distributions. Instead by using "sudo" our regular user account is temporarily given the power to do otherwise forbidden things. Don't worry about it too much at this stage, but if you want to know more check out this introduction.
If everything is working you should get a list of the various drives detected along with their device IDs which will look like this: /dev/sdc. You can also check the device IDs of the drives by opening the disk utility app. Jot these IDs down as we'll need them for our next step, creating our RAIDz array.
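If the fdisk output is hard to parse, lsblk prints a tidier table, and – as an optional extra not covered in the original steps – the smartmontools package will let you check each drive's SMART health before you commit it to the pool (substitute your own device IDs for /dev/sdb):
lsblk -o NAME,SIZE,MODEL,SERIAL
sudo apt install smartmontools
sudo smartctl -H /dev/sdb
A healthy drive should report a PASSED overall-health result.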
RAIDz is similar to RAID-5 in that instead of striping your data over multiple disks, exchanging redundancy for speed and available space (RAID-0), or mirroring your data by writing two copies of every piece (RAID-1), it instead writes parity blocks across the disks in addition to striping, which provides a balance of speed, redundancy and available space. If a single drive fails, the parity blocks on the working drives can be used to reconstruct the entire array as soon as a replacement drive is added.
Additionally, RAIDz improves over some of the common RAID-5 flaws. It's more resilient and capable of self healing, as it is capable of automatically checking for errors against a checksum. It's more forgiving in this way, and it's likely that you'll be able to detect when a drive is dying well before it fails. A RAIDz array can survive the loss of any one drive.
Note: While RAIDz is indeed resilient, if a second drive fails during the rebuild, you're fucked. Always keep backups of things you can't afford to lose. This tutorial, however, is not about proper data safety.
To create the pool, use the following command:
sudo zpool create "zpoolnamehere" raidz "device IDs of drives we're putting in the pool"
For example, let's creatively name our zpool "mypool". This pool will consist of four drives which have the device IDs: sdb, sdc, sdd, and sde. The resulting command will look like this:
sudo zpool create mypool raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
If, as an example, you bought five HDDs and decided you wanted more redundancy, dedicating two drives to parity, we would modify the command to "raidz2" and it would look something like the following:
sudo zpool create mypool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
An array configured like this is known as RAIDz2 and is able to survive two disk failures.
Once the zpool has been created, we can check its status with the command:
zpool status
Or more concisely with:
zpool list
The nice thing about ZFS as a file system is that a pool is ready to go immediately after creation. If we were to set up a traditional RAID-5 array using mdadm, we'd have to sit through a potentially hours long process of reformatting and partitioning the drives. Instead we're ready to go right out the gates.
The zpool should be automatically mounted to the filesystem after creation, check on that with the following:
df -hT | grep zfs
Note: If your computer ever loses power suddenly, say in event of a power outage, you may have to re-import your pool. In most cases, ZFS will automatically import and mount your pool, but if it doesn’t and you can't see your array, simply open the terminal and type sudo zpool import -a.
By default a zpool is mounted at /"zpoolname". The pool should be under our ownership but let's make sure with the following command:
sudo chown -R "yourlinuxusername" /"zpoolname"
Note: Changing file and folder ownership with "chown" and file and folder permissions with "chmod" are essential commands for much of the admin work in Linux, but we won't be dealing with them extensively in this guide. If you'd like a deeper tutorial and explanation you can check out these two guides: chown and chmod.
You can access the zpool file system through the GUI by opening the file manager (the Ubuntu default file manager is called Nautilus) and clicking on "Other Locations" on the sidebar, then entering the Ubuntu file system and looking for a folder with your pool's name. Bookmark the folder on the sidebar for easy access.
Your storage pool is now ready to go. Assuming that we already have some files on our Windows PC we want to copy over, we're going to need to install and configure Samba to make the pool accessible in Windows.
Step Five: Setting Up Samba/Sharing
Samba is what's going to let us share the zpool with Windows and allow us to write to it from our Windows machine. First let's install Samba with the following commands:
sudo apt-get update
then
sudo apt-get install samba
Next create a password for Samba.
sudo smbpasswd -a "yourlinuxusername"
It will then prompt you to create a password. Just reuse your Ubuntu user password for simplicity's sake.
Note: if you're using just a single external drive replace the zpool location in the following commands with wherever it is your external drive is mounted, for more information see this guide on mounting an external drive in Ubuntu.
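For reference, a minimal sketch of mounting an external drive looks something like the following (the device name /dev/sdb1 and mount point /media/mediadrive are just example placeholders, and this assumes the drive is already formatted; see the linked guide for the full walkthrough, including making the mount permanent via /etc/fstab):
sudo mkdir -p /media/mediadrive
sudo mount /dev/sdb1 /media/mediadrive
df -h /media/mediadrive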
After you've created a password we're going to create a shareable folder in our pool with this command
mkdir /"zpoolname"/"foldername"
Now we're going to open the smb.conf file and make that folder shareable. Enter the following command.
sudo nano /etc/samba/smb.conf
This will open the .conf file in nano, the terminal text editor program. Now at the end of smb.conf add the following entry:
["foldername"]
path = /"zpoolname"/"foldername"
available = yes
valid users = "yourlinuxusername"
read only = no
writable = yes
browseable = yes
guest ok = no
Ensure that each option sits on its own single line with no stray line breaks, and that there's a space on both sides of the equals sign. Our next step is to allow Samba traffic through the firewall:
sudo ufw allow samba
Finally restart the Samba service:
sudo systemctl restart smbd
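If the share doesn't behave as expected, a handy first check is testparm, a small utility that ships with Samba and will flag any syntax errors in smb.conf (rerun the restart command afterwards if you end up fixing anything):
testparm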
At this point we'll be able to access the pool, browse its contents, and read and write to it from Windows. But there's one more thing left to do: Windows doesn't natively support the ZFS file system and will read the used/available/total space in the pool incorrectly. Windows will read available space as total drive space, and all used space as null. This leads to Windows only displaying a dwindling amount of "available" space as the drives are filled. We can fix this! Functionally this doesn't actually matter, we can still write and read to and from the disk, it just makes it difficult to tell at a glance the proportion of used/available space, so this is an optional step but one I recommend (this step is also unnecessary if you're just using a single external drive). What we're going to do is write a little shell script in bash. Open nano with the terminal with the command:
nano
Now insert the following code:
#!/bin/bash
CUR_PATH=`pwd`
ZFS_CHECK_OUTPUT=$(zfs get type $CUR_PATH 2>&1 > /dev/null) > /dev/null
if [[ $ZFS_CHECK_OUTPUT == *not\ a\ ZFS* ]]
then
  IS_ZFS=false
else
  IS_ZFS=true
fi
if [[ $IS_ZFS = false ]]
then
  df $CUR_PATH | tail -1 | awk '{print $2" "$4}'
else
  USED=$((`zfs get -o value -Hp used $CUR_PATH` / 1024)) > /dev/null
  AVAIL=$((`zfs get -o value -Hp available $CUR_PATH` / 1024)) > /dev/null
  TOTAL=$(($USED+$AVAIL)) > /dev/null
  echo $TOTAL $AVAIL
fi
Save the script as "dfree.sh" to /home/"yourlinuxusername" then change the permissions of the file to make it executable with this command:
sudo chmod 774 dfree.sh
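You can optionally give the script a quick test before wiring it into Samba: run it from inside the pool and it should print two numbers, the pool's total and available space in 1K blocks:
cd /"zpoolname"
/home/"yourlinuxusername"/dfree.sh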
Now open smb.conf with sudo again:
sudo nano /etc/samba/smb.conf
Now add this entry to the top of the configuration file to direct Samba to use the results of our script when Windows asks for a reading on the pool's used/available/total drive space:
[global]
dfree command = /home/"yourlinuxusername"/dfree.sh
Save the changes to smb.conf and then restart Samba again with the terminal:
sudo systemctl restart smbd
Now there’s one more thing we need to do to fully set up the Samba share, and that’s to modify a hidden group permission. In the terminal window type the following command:
sudo usermod -a -G sambashare "yourlinuxusername"
Then restart samba again:
sudo systemctl restart smbd
If we don’t do this last step, everything will appear to work fine, and you will even be able to see and map the drive from Windows and even begin transferring files, but you'd soon run into a lot of frustration: every ten minutes or so a file would fail to transfer and you would get a window announcing “0x8007003B Unexpected Network Error”. This window would require your manual input to continue the transfer with the file next in the queue. And at the end it would reattempt to transfer whichever files failed the first time around. 99% of the time they’ll go through on that second try, but this is still all a major pain in the ass. Especially if you’ve got a lot of data to transfer or you want to step away from the computer for a while.
It turns out samba can act a little weirdly with the higher read/write speeds of RAIDz arrays and transfers from Windows, and will intermittently crash and restart itself if this group option isn’t changed. Inputting the above command will prevent you from ever seeing that window.
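To double check that the group change took, list your user's groups; sambashare should appear in the output (you may need to log out and back in for the change to fully apply):
groups "yourlinuxusername"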
The last thing we're going to do before switching over to our Windows PC is grab the IP address of our Linux machine. Enter the following command:
hostname -I
This will spit out this computer's IP address on the local network (it will look something like 192.168.0.x), write it down. It might be a good idea once you're done here to go into your router settings and reserve that IP for your Linux system in the DHCP settings. Check the manual for your specific model router on how to access its settings, typically it can be accessed by opening a browser and typing http://192.168.0.1 in the address bar, but your router may be different.
Okay we’re done with our Linux computer for now. Get on over to your Windows PC, open File Explorer, right click on Network and click "Map network drive". Select Z: as the drive letter (you don't want to map the network drive to a letter you could conceivably be using for other purposes) and enter the IP of your Linux machine and the name of the share we defined in smb.conf like so: \\"LINUXCOMPUTERLOCALIPADDRESSGOESHERE"\"foldernamegoeshere"\. Windows will then ask you for your username and password, enter the ones you set earlier in Samba and you're good. If you've done everything right it should look something like this:
You can now start moving media over from Windows to the share folder. It's a good idea to have a hard line running to all machines. Moving files over Wi-Fi is going to be tortuously slow, the only thing that’s going to make the transfer time tolerable (hours instead of days) is a solid wired connection between both machines and your router.
Step Six: Setting Up Remote Desktop Access to Your Server
After the server is up and going, you’ll want to be able to access it remotely from Windows. Barring serious maintenance/updates, this is how you'll access it most of the time. On your Linux system open the terminal and enter:
sudo apt install xrdp
Then:
sudo systemctl enable xrdp
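One hedged note: xrdp listens on the standard RDP port, 3389. Ubuntu Desktop ships with the ufw firewall disabled, but if you've enabled it at some point you'll need to let that traffic through, the same way we did for Samba:
sudo ufw allow 3389/tcp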
Once it's finished installing, open “Settings” on the sidebar and turn off "automatic login" in the User category. Then log out of your account. Attempting to remotely connect to your Linux computer while you’re logged in will result in a black screen!
Now get back on your Windows PC, open search and look for "RDP". A program called "Remote Desktop Connection" should pop up, open this program as an administrator by right-clicking and selecting “run as an administrator”. You’ll be greeted with a window. In the field marked “Computer” type in the IP address of your Linux computer. Press connect and you'll be greeted with a new window and prompt asking for your username and password. Enter your Ubuntu username and password here.
If everything went right, you’ll be logged into your Linux computer. If the performance is sluggish, adjust the display options. Lowering the resolution and colour depth do a lot to make the interface feel snappier.
Remote access is how we're going to be using our Linux system from now on, barring edge cases like needing to get into the BIOS or upgrading to a new version of Ubuntu. Everything else, from performing maintenance like a monthly zpool scrub to checking zpool status and updating software, can all be done remotely.
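As an example of that routine maintenance: a scrub has ZFS walk the entire pool and verify every block against its checksum, repairing anything it can. You can kick one off by hand, or, if you'd rather not have to remember, schedule it with cron (a minimal sketch, assuming the pool is named mypool; adjust the schedule to taste):
sudo zpool scrub mypool
zpool status mypool
sudo crontab -e
Then add a line like the following to run a scrub at 3am on the first of every month:
0 3 1 * * /usr/sbin/zpool scrub mypool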
This is how my server lives its life now, happily humming and chirping away on the floor next to the couch in a corner of the living room.
Step Seven: Plex Media Server/Jellyfin
Okay we’ve got all the groundwork finished and our server is almost ready to go. We’ve got Ubuntu up and running, our storage array is primed, we’ve set up remote connections and sharing, and maybe we’ve moved over some of our favourite movies and TV shows.
Now we need to decide on the media server software to use, which will stream our media to us and organize our library. For most people I’d recommend Plex. It just works 99% of the time. That said, Jellyfin has a lot to recommend it too, even if it is rougher around the edges. Some people run both simultaneously, it’s not that big of an extra strain. I do recommend doing a little bit of your own research into the features each platform offers, but as a quick run down, consider some of the following points:
Plex is closed source and is funded through PlexPass purchases while Jellyfin is open source and entirely user driven. This means a number of things: for one, Plex requires you to purchase a “PlexPass” (purchased as a one time lifetime fee of $159.99 CDN/$120 USD or paid for on a monthly or yearly subscription basis) in order to access certain features, like hardware transcoding (and we want hardware transcoding) or automated intro/credits detection and skipping, while Jellyfin offers some of these features for free through plugins. Plex supports a lot more devices than Jellyfin and updates more frequently. That said, Jellyfin's Android and iOS apps are completely free, while the Plex Android and iOS apps must be activated for a one time cost of $6 CDN/$5 USD. But that $6 fee gets you a mobile app that is much more functional and features a unified UI across platforms, the Plex mobile apps are simply a more polished experience. The Jellyfin apps are a bit of a mess and the iOS and Android versions are very different from each other.
Jellyfin’s actual media player is more fully featured than Plex's, but on the other hand Jellyfin's UI, library customization and automatic media tagging really pale in comparison to Plex. Streaming your music library is free through both Jellyfin and Plex, but Plex offers the PlexAmp app for dedicated music streaming which boasts a number of fantastic features, unfortunately some of those fantastic features require a PlexPass. If your internet is down, Jellyfin can still do local streaming, while Plex can fail to play files unless you've got it set up a certain way. Jellyfin has a slew of neat niche features like support for Comic Book libraries with the .cbz/.cbt file types, but then Plex offers some free ad-supported TV and films, they even have a free channel that plays nothing but Classic Doctor Who.
Ultimately it's up to you, I settled on Plex because although some features are pay-walled, it just works. It's more reliable and easier to use, and a one-time fee is much easier to swallow than a subscription. I had a pretty easy time getting my boomer parents and tech illiterate brother introduced to and using Plex and I don't know if I would've had as easy a time doing that with Jellyfin. I do also need to mention that Jellyfin does take a little extra bit of tinkering to get going in Ubuntu, you’ll have to set up process permissions, so if you're more tolerant to tinkering, Jellyfin might be up your alley and I’ll trust that you can follow their installation and configuration guide. For everyone else, I recommend Plex.
So pick your poison: Plex or Jellyfin.
Note: The easiest way to download and install either of these packages in Ubuntu is through the Snap Store.
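If you prefer the terminal to the Snap Store GUI, you can search for and install the packages from there instead; a quick sketch (double check the exact package name in the search results first, I'm deliberately leaving it as a placeholder here):
snap find plex
snap find jellyfin
sudo snap install "packagenamefromsearchresults"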
After you've installed one (or both), opening either app will launch a browser window into the browser version of the app allowing you to set all the options server side.
The process of creating media libraries is essentially the same in both Plex and Jellyfin. You create separate libraries for Television, Movies, and Music and add the folders which contain the respective types of media to their respective libraries. The only difficult or time consuming aspect is ensuring that your files and folders follow the appropriate naming conventions:
Plex naming guide for Movies
Plex naming guide for Television
Jellyfin follows the same naming rules but I find their media scanner to be a lot less accurate and forgiving than Plex. Once you've selected the folders to be scanned the service will scan your files, tagging everything and adding metadata. Although I do find Plex more accurate, it can still erroneously tag some things and you might have to manually clean up some tags in a large library. (When I initially created my library it tagged the 1963-1989 Doctor Who as some Korean soap opera and I needed to manually select the correct match after which everything was tagged normally.) It can also be a bit testy with anime (especially OVAs), so be sure to check TVDB to ensure that you have your files and folders structured and named correctly. If something is not showing up at all, double check the name.
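As a rough illustration of the general shape both scanners expect (the titles here are just examples, and the naming guides linked above are the authoritative reference):
Movies/
    The Matrix (1999)/
        The Matrix (1999).mkv
TV Shows/
    Doctor Who (1963)/
        Season 01/
            Doctor Who (1963) - s01e01 - An Unearthly Child.mkv
Music/
    Artist Name/
        Album Name/
            01 - Track Name.flac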
Once that's done, organizing and customizing your library is easy. You can set up collections, grouping items together to fit a theme or collect together all the entries in a franchise. You can make playlists, and add custom artwork to entries. It's fun setting up collections with posters to match, there are even several websites dedicated to help you do this like PosterDB. As an example, below are two collections in my library, one collecting all the entries in a franchise, the other follows a theme.
My Star Trek collection, featuring all eleven television series, and thirteen films.
My Best of the Worst collection, featuring sixty-nine films previously showcased on RedLetterMedia’s Best of the Worst. They’re all absolutely terrible and I love them.
As for settings, ensure you've got Remote Access going (it should work automatically) and be sure to set your upload speed after running a speed test. In the library settings, set the database cache to 2000MB to ensure a snappier and more responsive browsing experience, and then check that playback quality is set to original/maximum. If you're severely bandwidth-limited on your upload and have remote users, you might want to limit the remote stream bitrate to something more reasonable; as a point of comparison, Netflix's 1080p bitrate is approximately 5 Mbps, although almost anyone watching through a Chromium-based browser is streaming at 720p and 3 Mbps. Other than that you should be good to go. For actually playing your files, there's a Plex app for just about every platform imaginable. I mostly watch television and films on my laptop using the Windows Plex app, but I also use the Android app, which can broadcast to the Chromecast connected to the TV in the office, and the Android TV app for our smart TV. Both are fully functional and easy to navigate, and I can also attest to the OS X version being equally functional.
Part Eight: Finding Media
Now, this is not really a piracy tutorial, there are plenty of those out there. But if you’re unaware, BitTorrent is free and pretty easy to use, just pick a client (qBittorrent is the best) and go find some public trackers to peruse. Just know now that all the best trackers are private and invite only, and that they can be exceptionally difficult to get into. I’m already on a few, and even then, some of the best ones are wholly out of my reach.
If you decide to take the left hand path and turn to Usenet you’ll have to pay. First you’ll need to sign up with a provider like Newshosting or EasyNews for access to Usenet itself, and then to actually find anything you’re going to need to sign up with an indexer like NZBGeek or NZBFinder. There are dozens of indexers, and many people cross post between them, but for more obscure media it’s worth checking multiple. You’ll also need a binary downloader like SABnzbd. That caveat aside, Usenet is faster, bigger, older, less traceable than BitTorrent, and altogether slicker. I honestly prefer it, and I'm kicking myself for taking this long to start using it because I was scared off by the price. I’ve found so many things on Usenet that I had sought in vain elsewhere for years, like a 2010 Italian film about a massacre perpetrated by the SS that played the festival circuit but never received a home media release; some absolute hero uploaded a rip of a festival screener DVD to Usenet. Anyway, figure out the rest of this shit on your own and remember to use protection, get yourself behind a VPN, use a SOCKS5 proxy with your BitTorrent client, etc.
On the legal side of things, if you’re around my age, you (or your family) probably have a big pile of DVDs and Blu-Rays sitting around unwatched and half forgotten. Why not do a bit of amateur media preservation, rip them and upload them to your server for easier access? (Your tools for this are going to be Handbrake to do the ripping and AnyDVD to break any encryption.) I went to the trouble of ripping all my SCTV DVDs (five box sets worth) because none of it is on streaming nor could it be found on any pirate source I tried. I’m glad I did, forty years on it’s still one of the funniest shows to ever be on TV.
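If you'd rather script your rips than click through the GUI, HandBrake also ships a command-line version. A minimal sketch, assuming the disc is already readable/decrypted (the input/output paths and the preset name here are placeholders; run HandBrakeCLI --preset-list to see what your build offers):

# Rip the main feature from a disc or disc folder to an MKV using a built-in preset
# (paths and preset name are placeholders; adjust to your setup)
HandBrakeCLI --input /dev/sr0 --main-feature \
  --preset "Fast 1080p30" \
  --output "/home/you/Media/Movies/Example Movie (1999)/Example Movie (1999).mkv"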
Part Nine/Epilogue: Sonarr/Radarr/Lidarr and Overseerr
There are a lot of ways to automate your server for better functionality or to add features you and other users might find useful. Sonarr, Radarr, and Lidarr are part of a suite of “Servarr” services (there's also Readarr for books and Whisparr for adult content) that allow you to automate the collection of new episodes of TV shows (Sonarr), new movie releases (Radarr), and music releases (Lidarr). They hook into your BitTorrent client or Usenet binary newsgroup downloader and crawl your preferred torrent trackers and Usenet indexers, alerting you to new releases and automatically grabbing them. You can also use these services to manually search for new media, and even replace/upgrade your existing media with better quality uploads. They're a little tricky to set up on a bare-metal Ubuntu install (ideally you should be running them in Docker containers), and I won't be providing a step-by-step on installing and running them; I'm simply making you aware of their existence.
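For the curious, here's roughly what running one of them in Docker looks like. This is only a sketch based on the widely used linuxserver.io image; the image name, port, and host paths are assumptions, so check the project's own installation docs before copying it:

# Run Sonarr in a container, giving it a config folder, your TV library, and your downloads folder
# (image name, port, and host paths are assumptions; adjust them to your setup)
docker run -d --name=sonarr \
  -e PUID=1000 -e PGID=1000 -e TZ=Etc/UTC \
  -p 8989:8989 \
  -v /path/to/sonarr/config:/config \
  -v /path/to/tv:/tv \
  -v /path/to/downloads:/downloads \
  --restart unless-stopped \
  lscr.io/linuxserver/sonarr:latest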
The other bit of kit I want to make you aware of is Overseerr, a program that scans your Plex media library and serves up recommendations based on what you like. It also allows you and your users to request specific media. It can even be integrated with Sonarr/Radarr/Lidarr so that fulfilling those requests is fully automated.
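Overseerr is just as easy to trial in a container. Another hedged sketch, with the image name, port, and config path assumed rather than copied from the official docs, so verify them there first:

# Run Overseerr and expose its web UI on port 5055
# (image name, port, and config path are assumptions; adjust to your setup)
docker run -d --name=overseerr \
  -e TZ=Etc/UTC \
  -p 5055:5055 \
  -v /path/to/overseerr/config:/app/config \
  --restart unless-stopped \
  sctx/overseerr:latest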
And you're done. It really wasn't all that hard. Enjoy your media. Enjoy the control you have over that media. And be safe in the knowledge that no hedge fund CEO motherfucker who hates the movies but who is somehow in control of a major studio will be able to disappear anything in your library as a tax write-off.
1K notes
·
View notes
Note
Tell us another story from your security guard days
You learned as a patroller (or any other position that actually worked in the location we were guarding) that the alarms never worked properly. However, the people in Security Control, who were supposed to remotely manage the alarms, believed wholeheartedly and blindly in their infallibility.
One day, while I was operating a metal detector out of the data center floor, a trained monkey (my semi-fond semi-condescending nickname for Google employees) came out of the data center and said, "There's a fire in there."
"Okay," I said. There was a patroller passing by at the moment, and I waved him over and asked him to go onto the DC floor to take care of the fire.
He looked at me like I was crazy and said, "What am I supposed to do about it?"
"Find out where it is?" I said. (The trained monkey did not know, and had apparently seen it some fifteen minutes prior and chosen to finish up whatever he was doing before going and alerting anyone else about it.) "Report it to the supervisor and ask him to call the fire department? Put it out, if it's small enough?"
"I don't know how to use a fire extinguisher," he said, which was an interesting thing for him to say considering every security guard there had mandatory annual fire extinguisher and first aid training.
"Okay," I said. "Then take over my post."
"I'm about to go to lunch," he said.
I picked up my radio and reported to the supervisor that he had taken over my post. (If the post was later found unattended it would be on him, at that point.) Then I went onto the DC floor.
The fire was not big, but it was loud, so I was able to find it without much trouble. One of the servers had caught on fire somehow. I called Security Control and told them there was a fire and where it was.
"No there isn't," Security Control said.
"Yes there is," I said.
"There are no alarms going off," Security Control said.
"There sure aren't," I said. "But I'm looking at a fire right now."
"There can't be a fire," Security Control said. "There's no alarm."
I put my radio back on my belt, deciding I had better deal with the problem before it got bigger, and quickly put the fire out (using my mandatory annual fire extinguisher training). Then I radioed my supervisor and reported the fire. Fortunately he was a cool guy who believed me and had my back about it and I got to listen to him chew out Security Control for the rest of the day about the alarm system and the seriousness of fires and the proper protocol for dealing with fires.
That's why you always always always back up anything you have stored on Google Drive or any of their other online storage services. Or like. Just don't use them at all.
302 notes
·
View notes
Text
She Won. They Didn't Just Change the Machines. They Rewired the Election. How Leonard Leo's 2021 sale of an electronics firm enabled tech giants to subvert the 2024 election.

Everyone knows how the Republicans interfered in the 2024 US elections through voter interference and voter-roll manipulation, which in itself could have changed the outcomes of the elections. What's coming to light now reveals that indeed those occupying the White House, at least, are not those who won the election.
Here's how they did it.
(full story is replicated here below the read-more: X)
She Won
The missing votes uncovered in Smart Elections’ legal case in Rockland County, New York, are just the tip of the iceberg—an iceberg that extends across the swing states and into Texas.
On Monday, an investigator’s story finally hit the news cycle: Pro V&V, one of only two federally accredited testing labs, approved sweeping last-minute updates to ES&S voting machines in the months leading up to the 2024 election—without independent testing, public disclosure, or full certification review.
These changes were labeled “de minimis”—a term meant for trivial tweaks. But they touched ballot scanners, altered reporting software, and modified audit files—yet were all rubber-stamped with no oversight.
That revelation is a shock to the public.
But for those who’ve been digging into the bizarre election data since November, this isn’t the headline—it’s the final piece to the puzzle. While Pro V&V was quietly updating equipment in plain sight, a parallel operation was unfolding behind the curtain—between tech giants and Donald Trump.
And it started with a long forgotten sale.
A Power Cord Becomes a Backdoor
In March 2021, Leonard Leo—the judicial kingmaker behind the modern conservative legal machine—sold a quiet Chicago company by the name of Tripp Lite for $1.65 billion. The buyer: Eaton Corporation, a global power infrastructure conglomerate that just happened to have a partnership with Peter Thiel’s Palantir.
To most, Tripp Lite was just a hardware brand—battery backups, surge protectors, power strips. But in America’s elections, Tripp Lite devices were something else entirely.
They are physically connected to ES&S central tabulators and Electionware servers, and Dominion tabulators and central servers across the country. And they aren't dumb devices. They are smart UPS units: programmable, updatable, and capable of communicating directly with the election system via USB, serial port, or Ethernet.
ES&S systems, including central tabulators and Electionware servers, rely on Tripp Lite UPS devices. ES&S’s Electionware suite runs on Windows OS, which automatically trusts connected UPS hardware.
If Eaton pushed an update to those UPS units, it could have gained root-level access to the host tabulation environment—without ever modifying certified election software.
In Dominion’s Democracy Suite 5.17, the drivers for these UPS units are listed as “optional”—meaning they can be updated remotely without triggering certification requirements or oversight. Optional means unregulated. Unregulated means invisible. And invisible means perfect for infiltration.
Enter the ballot scrubbing platform BallotProof. Co-created by Ethan Shaotran, a longtime employee of Elon Musk and current DOGE employee, BallotProof was pitched as a transparency solution—an app to “verify” scanned ballot images and support election integrity.
With Palantir's AI controlling the backend, and BallotProof cleaning the front, only one thing was missing: the signal to go live.
September 2024: Eaton and Musk Make It Official
Then came the final public breadcrumb: In September 2024, Eaton formally partnered with Elon Musk.
The stated purpose? A vague, forward-looking collaboration focused on “grid resilience” and “next-generation communications.”
But buried in the partnership documents was this line:
“Exploring integration with Starlink's emerging low-orbit DTC infrastructure for secure operational continuity.”
The Activation: Starlink Goes Direct-to-Cell
That signal came on October 30, 2024—just days before the election, Musk activated 265 brand new low Earth orbit (LEO) V2 Mini satellites, each equipped with Direct-to-Cell (DTC) technology capable of processing, routing, and manipulating real-time data, including voting data, through his satellite network.
DTC doesn’t require routers, towers, or a traditional SIM. It connects directly from satellite to any compatible device—including embedded modems in “air-gapped” voting systems, smart UPS units, or unsecured auxiliary hardware.
From that moment on:
Commands could be sent from orbit
Patch delivery became invisible to domestic monitors
Compromised devices could be triggered remotely
This groundbreaking project, which should have taken two-plus years to build, was completed in just under ten months.
Elon Musk boasts endlessly about everything he’s launching, building, buying—or even just thinking about—whether it’s real or not. But he pulls off one of the largest and fastest technological feats in modern day history… and says nothing? One might think that was kind of… “weird.”
According to New York Times reporting, on October 5—just before Starlink’s DTC activation—Musk texted a confidant:
“I’m feeling more optimistic after tonight. Tomorrow we unleash the anomaly in the matrix.”
Then, an hour later:
“This isn’t something on the chessboard, so they’ll be quite surprised. ‘Lasers’ from space.”
It read like a riddle. In hindsight, it was a blueprint.
The Outcome
Data that makes no statistical sense. A clean sweep in all seven swing states.
The fall of the Blue Wall. Eighty-eight counties flipped red—not one flipped blue.
Every victory landed just under the threshold that would trigger an automatic recount. Donald Trump outperformed expectations in down-ballot races with margins never before seen—while Kamala Harris simultaneously underperformed in those exact same areas.
If one were to accept these results at face value—Donald Trump, a 34-count convicted felon, supposedly outperformed Ronald Reagan. According to the co-founder of the Election Truth Alliance:
“These anomalies didn’t happen nationwide. They didn’t even happen across all voting methods—this just doesn’t reflect human voting behavior.”
They were concentrated.
Targeted.
Specific to swing states and Texas—and specific to Election Day voting.
And the supposed explanation? “Her policies were unpopular.” Let’s think this through logically. We’re supposed to believe that in all the battleground states, Democratic voters were so disillusioned by Vice President Harris’s platform that they voted blue down ballot—but flipped to Trump at the top of the ticket?
Not in early voting.
Not by mail.
With exception to Nevada, only on Election Day.
And only after a certain threshold of ballots had been cast—where VP Harris’s numbers begin to diverge from her own party, and Trump’s suddenly begin to surge. As President Biden would say, “C’mon, man.”
In the world of election data analysis, there’s a term for that: vote-flipping algorithm.
And of course, Donald Trump himself:
He spent a year telling his followers he didn’t need their votes—at one point stating,
“…in four years, you don't have to vote again. We'll have it fixed so good, you're not gonna have to vote.”
____
They almost got away with the coup. The fact that they still occupy the White House and control most of the US government will make removing them and replacing them with the rightful President Harris a very difficult task.
But for this nation to survive, and for the world to not fall further into chaos due to this "administration," we must rid ourselves of the pretender and his minions and controllers once and for all.
30 notes
·
View notes
Text
Heads up folks, NicoNicoDouga is currently down due to a large scale cyberattack
The attack happened on the 8th, and the site is still down in terms of video streaming. Apparently there were reports of ransomware being used during the attack.
The site is still “down,” but the blog part is back up, and according to the report the videos and content already posted are okay, so do not fret. The site remains down as of this post (save for the blog), and it seems they are working their hardest to fix it and do damage control.
Here is a rough translation of their most recent post:
Report and apology regarding cyberattack on our services
As announced in Niconico Info dated June 8th, 2024, the entire Niconico service operated by Dwango Co., Ltd. (Headquarters: Chuo-ku, Tokyo; President and CEO: Takeshi Natsuno) has been unavailable since the early morning of June 8th. It has been confirmed that this outage was caused by a large-scale cyberattack, including ransomware, and we have temporarily suspended use of the service while conducting an investigation and response to fully grasp the extent of the damage and restore it.
After confirming the cyberattack, we immediately took emergency measures such as shutting down the relevant servers, and have set up a task force to fully clarify the damage, determine the cause, and restore the system. We would like to report the findings of the investigation to date and future responses as follows.
We sincerely apologize to our users and related parties for the great inconvenience and concern caused.
<Response history>
Around 3:30 a.m. on June 8, a malfunction occurred that prevented all of our web services, including our "Nico Nico" and "N Preparatory School" services, from working properly. After an investigation, it was confirmed that the malfunction was caused by a cyber attack, including ransomware, at around 8 a.m. on the same day. A task force was set up on the same day, and in order to prevent the damage from spreading, we immediately cut off communication between servers in the data center provided by our group companies and shut down the servers, temporarily suspending the provision of our web services. In addition, since it was discovered that the attack had also extended to our internal network, we suspended the use of some of our internal business systems and prohibited access to the internal network.
As of June 14, we are currently investigating the extent of the damage and formulating recovery procedures, aiming for a gradual recovery.
June 8, 2024
We have begun an investigation into the malfunction that prevented all of our "Nico Nico" services from working properly and the failure of some of our internal systems.
We have confirmed that the cause of the failure was encryption by ransomware
"Nico Nico" services in general and some internal business systems were suspended and servers were shut down
A task force was established
First report "Regarding the situation in which Nico Nico services are unavailable" was announced
June 9, 2024
Contacted the police and consulted with external specialist agencies
Kabukiza office was closed
KADOKAWA announced "Regarding the occurrence of failures on multiple KADOKAWA Group websites"
June 10, 2024
Reported to the Personal Information Protection Commission (first report)
Second report "Regarding the situation in which Nico Nico services are unavailable" was announced
June 12, 2024
Reported the occurrence of the failure to the Kanto Regional Financial Bureau (Financial Services Agency)
June 14, 2024
This announcement
In addition to public cloud services, Niconico uses private cloud services built in data centers provided by KADOKAWA Group companies, to which our company belongs. One of these, a data center of a group company, was hit by a cyber attack, including ransomware, and a significant number of virtual machines were encrypted and became unavailable. As a result, the systems of all of our web services, including Niconico, were shut down.
This cyber attack by a third party was repeated even after it was discovered, and even after a server in the private cloud was shut down remotely, the third party was observed to be remotely starting the server and spreading the infection. Therefore, the power cables and communication cables of the servers were physically disconnected and blocked. As a result, all servers installed in the data centers provided by the group companies became unusable. In addition, to prevent further spread of infection, our employees are prohibited from coming to the Kabukiza office in principle, and our internal network and internal business systems have also been shut down.
The Niconico Video system, posted video data, and video distribution system were operated on the public cloud, so they were not affected. Niconico Live Broadcasting did not suffer any damage as the system itself was run on a public cloud, but the system that controls Niconico Live Broadcasting's video distribution is run on a private cloud of a group company, so it is possible that past time-shifted footage, etc. may not be available. We are also gradually checking the status of systems other than Niconico Douga and Niconico Live Broadcasting.
■ Services currently suspended
Niconico Family services such as Niconico Video, Niconico Live Broadcast, and Niconico Channel
Niconico account login on external services
Music monetization services
Dwango Ticket
Some functions of Dwango JP Store
N Preparatory School *Restored for students of N High School and S High School
Sending gifts for various projects
■ About Niconico-related programs
Until the end of July, official Niconico live broadcasts and channel live broadcasts using Niconico Live Broadcast and Niconico Channel will be suspended.
Considering that program production requires a preparation period and that Niconico Live Broadcast and Niconico Channel are monthly subscription services, we have decided to suspend live broadcasts on Niconico Live Broadcast until the end of July. Depending on the program, the broadcast may be postponed or broadcast on other services.
The date of resumption of Niconico services, including Niconico Live Broadcast and Niconico Channel, is currently undecided.
Niconico Channel Plus allows viewing of free content without logging in. Paid content viewing and commenting are not available.
■ About the new version "Nico Nico Douga (Re: Kari)" (read: nikoniko douga rikari)
While "Nico Nico" is suspended, as the first step, we will release a new version of "Nico Nico Douga (Re: Kari)" at 3:00 p.m. on June 14, 2024. Our development team voluntarily created this site in just three days, and it is a video community site with only basic functions such as video viewing and commenting, just like the early days of Niconico (2006). In consideration of the load on the service, only a selected portion of the videos posted on Niconico Video is available for viewing. The lineup is mainly popular videos from 2007, and you can watch them for free without an account.
■About the Niconico Manga app
We have already confirmed that many systems were not affected, and we are considering resuming the service with a reduced-function version that allows basic functions such as reading manga, commenting, and adding to favorites. We aim to restore the service within June 2024.
If any new facts become known in the future, we will report them on Niconico Info, Official X, our company website, etc. as they become available. We appreciate your understanding and cooperation.
[Added 6/10]
Thank you for your continued patronage. This is the Niconico management team.
Due to the effects of a large-scale cyber attack, Niconico has been unavailable since the early morning of June 8th.
We sincerely apologize for the inconvenience.
As of 6:00 p.m. on June 10th, we are working to rebuild the entire Niconico system without being affected by the cyber attack, in parallel with an investigation to grasp the full extent of the damage.
We have received many inquiries from you, such as "Will premium membership fees and paid channel membership fees be charged during the service suspension period?" and "What will happen to the time shift deadline for live broadcasts?". We are currently in the process of investigating the impact, so we cannot answer your questions, but we will respond sincerely, so please wait for further information.
Our executive officer Shigetaka Kurita and CTO Keiichi Suzuki are scheduled to explain the expected time until recovery and the information learned from the investigation up to that point this week.
We will inform you again about this as soon as we are ready.
■ Services currently suspended
Niconico Family Services such as Niconico Video, Niconico Live Broadcast, Niconico Channel, etc.
Niconico Account Login on External Services
[Added 2024/06/10 18:00]
Gifts for various projects (due to the suspension of related systems)
■ Programs scheduled to be canceled/postponed (as of June 10)
Programs from June 10 to June 16
■ Current situation
In parallel with the recovery work, we are investigating the route of the attack and the possibility of information leakage.
No credit card information has been leaked (Niconico does not store credit card information on its own servers).
The official program "Monthly Niconico Info" scheduled for June 11 at 20:00 will be broadcast on YouTube and X at a reduced scale. During this program, we will verbally explain the current situation in an easy-to-understand manner. (※There is no prospect of providing additional information, such as detailed recovery dates, during this program.)
"Monthly Niconico Info" can be viewed at the following URL. YouTube → https://www.youtube.com/@niconico_news X (formerly Twitter) → https://x.com/nico_nico_info
The latest information will be posted on Niconico Info and the official X (formerly Twitter).
We deeply apologize for the inconvenience caused to users and content providers who regularly enjoy our videos and live broadcasts. We ask for your understanding and cooperation until the issue is resolved.
[Published on 6/8]
Thank you for your continued patronage. This is the Niconico management team.
Currently, Niconico is under a large-scale cyber attack, and in order to minimize the impact, we have temporarily suspended our services.
We are accelerating our investigation and taking measures, but we cannot begin recovery until we are confident that we have completely eliminated the effects of the cyber attack and our safety has been confirmed. We do not expect to be able to restore services at least this weekend.
We sincerely apologize for the inconvenience.
We will inform you of the latest situation again on Monday (June 10, 2024).
■ Suspended services
Niconico family services such as Niconico Video, Niconico Live Broadcast, and Niconico Channel
Niconico account login on external services
■ Current situation
In parallel with the recovery work, we are investigating the route of the attack and the possibility of information leakage.
No credit card information has been confirmed to have been leaked (Niconico does not store credit card information on its own servers).
Future information will be announced on Niconico Info and Official X (formerly Twitter) as it becomes available.
We deeply apologize to all users who were looking forward to the video posts and live broadcasts scheduled for this weekend. We ask for your understanding and cooperation until the response is complete.
#news#internet#translation#nico nico douga#cyber attack#cyber security#hatsune miku#niconico#japan#please spread#please reblog this
101 notes
·
View notes
Text
Class Feature Friday: Hacker Specialization (Operative Specialization)

(art by gtasoul on DeviantArt)
If there’s anything unique to science fiction, it’s hacking. After all, traditional fantasy rarely has computers (and when they do, they’re usually the ancient, barely understood kind), leading to an entirely different avenue of heroic action as the characters crack open cybersecurity measures, often stylized with virtual avatars and the like.
Now, with their tech savvy, you probably expect the average hacker to be a mechanic or technomancer, and that’s fair, they certainly have the specialization. However, brilliant engineers and techno-mages are hardly the only archetypical hacker characters. Sometimes someone is only focused on computers and not engineering. Others might be agents with a variety of skills that just happens to include cybersecurity as a specialization.
After all, it’s one thing to remotely hack someone from across the cybercafe using the unsecured wifi, and an entirely different beast to sneak into a secure facility and crack open a server with no wireless connection.
And so we have the hacker specialization for operatives. Bank details, classified documents, incriminating messenger logs, the targeting software for the rocket turrets shelling the party… If its on a computer, it isn’t safe from them. What’s more, as operatives, they have the skills to get in close enough to do their hacking without being detected (hopefully). So let’s see what makes them special!
Naturally, these operatives are very familiar with computers and engineering, and they can use their computer skills as part of their trick attacks, sending distracting alerts to enemy headsets, causing malfunctions in nearby devices, or even simply projecting a distracting hologram from their own device. Obviously, however, they have to actually have a computer device on hand to do so. One of the rare occasions where an operative may be forced to use a different skill with their trick attacks.
As expert hackers, these operatives learn how to be especially cautious in their approach, reducing the chance of triggering countermeasures if they accidentally push too hard.
More skilled hackers can take control of devices they have hacked, potentially using their functions for their own benefit a few times before they return to normal or are deactivated.
While not as adept at hacking as other classes, hacker operatives can do a lot of fun things with it, particularly once they gain control of a system with their mid-level ability. Imagine activating the security systems to target the guards, or starting up machinery that proves distracting or hazardous, and so on. I recommend pairing your hacking skills with stealth or another sneaky skillset to make the most of it.
There are a lot of ways to play a hacker. They might be terminally online, or they might tap into vibes of the 80’s idea of a hacker as a cool trendy figure with fancy computer knowhow. Or they may be more professional about it. Certainly hacking has a long association with disrespecting authority, with all the character traits associated with that.
The hacker known only by the username LuckySTR!KE is a notorious thorn in the side of many corporations, earning them a bounty for their capture or death. However, the crafty worlanisi tends to stay a step ahead. But the contents of their latest datamining run have them nervous, and they're willing to pay for bodyguards.
A passionate hacktivist and self-proclaimed protector of the ecosystems of the galaxy. Beshara has developed a knack for sneaking into corporate facilities and sabotaging their efforts to study and exploit wildlife. However, when one such outing ends up with several researchers being scarred and killed by the acidic saliva of flying kriegakos, she begins to wonder if she has become too extreme.
An expedition to alien ruins may not seem like it needs a computer expert, but when it is discovered the stone buildings are interlaced with intricate technomagical circuitry, one is called in. Unfortunately, the team doesn’t realize this hacker is a corporate spy feeding information back to a rival conglomerate and seeking to steal the most precious treasures in the name of their employer. Unfortunately, her acts of espionage and greed end up awakening the mummified guardians of the ruin.
9 notes
·
View notes
Text
Engineering Development Group: CIA malware and hacking tools are built by EDG (Engineering Development Group), a software development group within CCI (Center for Cyber Intelligence), a department belonging to the CIA's DDI (Directorate for Digital Innovation). The DDI is one of the five major directorates of the CIA. The EDG is responsible for the development, testing and operational support of all backdoors, exploits, malicious payloads, trojans, viruses and any other kind of malware used by the CIA in its covert operations world-wide.
The increasing sophistication of surveillance techniques has drawn comparisons with George Orwell's 1984, but "Weeping Angel", developed by the CIA's Embedded Devices Branch (EDB), which infests smart TVs, transforming them into covert microphones, is surely its most emblematic realization. The attack against Samsung smart TVs was developed in cooperation with the United Kingdom's MI5/BTSS. After infestation, Weeping Angel places the target TV in a 'Fake-Off' mode, so that the owner falsely believes the TV is off when it is on. In 'Fake-Off' mode the TV operates as a bug, recording conversations in the room and sending them over the Internet to a covert CIA server.
As of October 2014 the CIA was also looking at infecting the vehicle control systems used by modern cars and trucks. The purpose of such control is not specified, but it would permit the CIA to engage in nearly undetectable assassinations.
The CIA's Mobile Devices Branch (MDB) developed numerous attacks to remotely hack and control popular smartphones. Infected phones can be instructed to send the CIA the user's geolocation, audio and text communications as well as covertly activate the phone's camera and microphone. These techniques permit the CIA to bypass the encryption of WhatsApp, Signal, Telegram, Wiebo, Confide and Cloackman by hacking the "smart" phones that they run on and collecting audio and message traffic before encryption is applied.
2 notes
·
View notes
Text
Bitcoin’s Energy Usage: The Most Misunderstood Innovation in Human History

They say Bitcoin is boiling the oceans. That it’s an environmental villain. That its energy use is unjustifiable.
But what if the real crime isn't the energy Bitcoin uses, but the narrative built to demonize it? What if Bitcoin isn’t the problem... but the blueprint for the solution?
Let’s talk truth. Let’s rip apart the lazy headlines and go deeper. Because beneath the noise is a revolution most people still don’t understand.
Bitcoin uses energy. So does everything that matters.
The media loves to compare Bitcoin to Visa or PayPal, painting it as inefficient or unsustainable. But that’s like comparing a flashlight to the sun. Visa runs on the rails of a trusted, centralized system. Bitcoin is the rail. It’s the whole damn thing—a self-contained, decentralized monetary system that operates without permission, politics, or backroom deals.
Its energy use isn’t a bug. It’s the bedrock. Proof-of-Work ties digital value to physical reality. It makes Bitcoin incorruptible. You can’t fake a Bitcoin. You can’t conjure it with a keystroke. You earn it by anchoring to the laws of thermodynamics. It’s not "magic internet money" – it’s physics-backed truth in a world of fiat fiction.
Meanwhile, the traditional financial system gets a free pass. Nobody counts the fuel burned by fleets of armored trucks hauling cash. Or the skyscrapers lit 24/7. Or the servers running endless transactions across thousands of banks, hedge funds, and central banks. No one questions the carbon footprint of the military-industrial complex that keeps the petrodollar on life support.
Bitcoin replaces all that bloat with software. With math. With consensus instead of coercion. It doesn’t require tanks to back it up. It doesn’t need to spy on you to enforce rules. It just runs. Borderless. Permissionless. Unstoppable.
But here’s where things get interesting.
Bitcoin mining isn’t just not bad for the environment. It could be the greatest tool we’ve ever had for energy innovation.
Across the globe, Bitcoin miners are setting up shop where energy is cheap, stranded, or wasted. Remote hydro in the mountains. Natural gas flares in oil fields. Oversupplied wind farms with nowhere to send excess power. Miners turn this lost energy into economic value. They act as a buyer of last resort—a pressure release valve for unstable grids and a reason to build more renewables.
This isn’t hypothetical. It’s happening right now. In Texas, Bitcoin miners are helping stabilize the grid. In parts of Africa, they're jumpstarting economic activity by creating demand where there was none. This is not an energy hog. This is a global infrastructure upgrade wrapped in code.
So why the backlash?
Because Bitcoin exposes the rot. It shines a light on the inefficiency, the fragility, and the waste embedded in the old system. It asks uncomfortable questions. It refuses to play by the rules of fiat gatekeepers. And that scares people.
It forces us to confront the truth: that energy isn’t the problem. Corruption is. Misaligned incentives are. And Bitcoin is the first monetary network in human history that rewards transparency, efficiency, and truth.
We’re witnessing the dawn of a new era—one where money is no longer a tool for control, but a tool for freedom. One where energy isn’t rationed by bureaucracy, but unleashed by innovation.
Bitcoin’s energy use isn’t a moral failing. It’s the cost of freedom. The cost of opting out. The cost of building something better.
We’ve misunderstood the most important innovation of our time.
But the block clock keeps ticking. And history has a way of proving the truth.
Tick tock. Next block.
Take Action Towards Financial Independence
If this article has sparked your interest in the transformative potential of Bitcoin, there’s so much more to explore! Dive deeper into the world of financial independence and revolutionize your understanding of money by following my blog and subscribing to my YouTube channel.
🌐 Blog: Unplugged Financial Blog Stay updated with insightful articles, detailed analyses, and practical advice on navigating the evolving financial landscape. Learn about the history of money, the flaws in our current financial systems, and how Bitcoin can offer a path to a more secure and independent financial future.
📺 YouTube Channel: Unplugged Financial Subscribe to our YouTube channel for engaging video content that breaks down complex financial topics into easy-to-understand segments. From in-depth discussions on monetary policies to the latest trends in cryptocurrency, our videos will equip you with the knowledge you need to make informed financial decisions.
👍 Like, subscribe, and hit the notification bell to stay updated with our latest content. Whether you’re a seasoned investor, a curious newcomer, or someone concerned about the future of your financial health, our community is here to support you on your journey to financial independence.
📚 Get the Book: The Day The Earth Stood Still 2.0 For those who want to take an even deeper dive, my book offers a transformative look at the financial revolution we’re living through. The Day The Earth Stood Still 2.0 explores the philosophy, history, and future of money, all while challenging the status quo and inspiring action toward true financial independence.
Support the Cause
If you enjoyed what you read and believe in the mission of spreading awareness about Bitcoin, I would greatly appreciate your support. Every little bit helps keep the content going and allows me to continue educating others about the future of finance.
Donate Bitcoin:
bc1qpn98s4gtlvy686jne0sr8ccvfaxz646kk2tl8lu38zz4dvyyvflqgddylk
#Bitcoin#BitcoinEnergy#ProofOfWork#SoundMoney#FixTheMoneyFixTheWorld#Decentralization#DigitalGold#BitcoinIsHope#CryptoRevolution#FinancialFreedom#EnergyInnovation#BitcoinMining#EndTheFUD#MonetaryRevolution#UnpluggedFinancial#TickTockNextBlock#BitcoinFixesThis#SustainableFuture#EconomicTruth#EnergyFUD#financial empowerment#blockchain#finance#globaleconomy#digitalcurrency#financial education#financial experts#unplugged financial#cryptocurrency
5 notes
·
View notes
Text
Breaking into Tech: How Linux Skills Can Launch Your Career in 2025
In today's rapidly evolving tech landscape, Linux skills have become increasingly valuable for professionals looking to transition into rewarding IT careers. As we move through 2025, the demand for Linux System Administrators continues to grow across industries, creating excellent opportunities for career changers—even those without traditional technical backgrounds.
Why Linux Skills Are in High Demand
Linux powers much of the world's technology infrastructure. From enterprise servers to cloud computing environments, this open-source operating system has become the backbone of modern IT operations. Organizations need skilled professionals who can:
Deploy and manage enterprise-level IT infrastructure
Ensure system security and stability
Troubleshoot complex technical issues
Implement automation to improve efficiency
The beauty of Linux as a career path is that it's accessible to motivated individuals willing to invest time in learning the necessary skills. Unlike some tech specialties that require years of formal education, Linux administration can be mastered through focused training programs and hands-on experience.
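To give a taste of what that hands-on work looks like, a lot of day-to-day administration boils down to small scripts like the sketch below, which checks whether a service is running and restarts it if not. The service name and log path are placeholders, not taken from any particular environment:

#!/usr/bin/env bash
# Minimal health-check sketch: restart a service if it has stopped and log what happened.
# SERVICE and LOGFILE are placeholders; set them for your own environment.
SERVICE="nginx"
LOGFILE="/var/log/service-check.log"

if ! systemctl is-active --quiet "$SERVICE"; then
    echo "$(date '+%F %T') $SERVICE was down, restarting" >> "$LOGFILE"
    systemctl restart "$SERVICE"
else
    echo "$(date '+%F %T') $SERVICE is running" >> "$LOGFILE"
fi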
The Path to Becoming a Linux System Administrator
1. Structured Learning
The journey begins with structured learning. Comprehensive training programs that cover Linux fundamentals, system administration, networking, and security provide the knowledge base needed to succeed. The most effective programs:
Teach practical, job-relevant skills
Offer instruction from industry professionals
Pace the learning to allow for deep understanding
Prepare students for respected certifications like Red Hat
2. Certification
Industry certifications validate your skills to potential employers. Red Hat certifications are particularly valuable, demonstrating your ability to work with enterprise Linux environments. These credentials help you stand out in a competitive job market and often lead to higher starting salaries.
3. Hands-On Experience
Theoretical knowledge isn't enough—employers want to see practical experience. Apprenticeship opportunities allow aspiring Linux administrators to:
Apply their skills in real-world scenarios
Build a portfolio of completed projects
Gain confidence in their abilities
Bridge the gap between training and employment
4. Job Search Strategy
With the right skills and experience, the final step is finding that first position. Successful job seekers:
Tailor their resumes to highlight relevant skills
Prepare thoroughly for technical interviews
Network with industry professionals
Target companies that value their newly acquired skills
Time Investment and Commitment
Becoming job-ready as a Linux System Administrator typically requires:
10-15+ hours per week for studying
A commitment to consistent learning over several months
Persistence through challenging technical concepts
A growth mindset and motivation to succeed
The Career Outlook
For those willing to make the investment, the rewards can be substantial. Linux professionals enjoy:
Competitive salaries
Strong job security
Opportunities for remote work
Clear paths for career advancement
Intellectually stimulating work environments
Conclusion
The path to becoming a Linux System Administrator is more accessible than many people realize. With the right training, certification, and hands-on experience, motivated individuals can transition into rewarding tech careers—regardless of their previous background. As we continue through 2025, the demand for these skills shows no signs of slowing down, making now an excellent time to begin this journey.
2 notes
·
View notes
Text
Top 10 In- Demand Tech Jobs in 2025

Technology is growing faster than ever, and so is the need for skilled professionals in the field. From artificial intelligence to cloud computing, businesses are looking for experts who can keep up with the latest advancements. These tech jobs not only pay well but also offer great career growth and exciting challenges.
In this blog, we’ll look at the top 10 tech jobs that are in high demand today. Whether you’re starting your career or thinking of learning new skills, these jobs can help you plan a bright future in the tech world.
1. AI and Machine Learning Specialists
Artificial Intelligence (AI) and Machine Learning are changing the game by helping machines learn and improve on their own without needing step-by-step instructions. They’re being used in many areas, like chatbots, spotting fraud, and predicting trends.
Key Skills: Python, TensorFlow, PyTorch, data analysis, deep learning, and natural language processing (NLP).
Industries Hiring: Healthcare, finance, retail, and manufacturing.
Career Tip: Keep up with AI and machine learning by working on projects and getting an AI certification. Joining AI hackathons helps you learn and meet others in the field.
2. Data Scientists
Data scientists work with large sets of data to find patterns, trends, and useful insights that help businesses make smart decisions. They play a key role in everything from personalized marketing to predicting health outcomes.
Key Skills: Data visualization, statistical analysis, R, Python, SQL, and data mining.
Industries Hiring: E-commerce, telecommunications, and pharmaceuticals.
Career Tip: Work with real-world data and build a strong portfolio to showcase your skills. Earning certifications in data science tools can help you stand out.
3. Cloud Computing Engineers
These professionals create and manage cloud systems that allow businesses to store data and run apps without needing physical servers, making operations more efficient.
Key Skills: AWS, Azure, Google Cloud Platform (GCP), DevOps, and containerization (Docker, Kubernetes).
Industries Hiring: IT services, startups, and enterprises undergoing digital transformation.
Career Tip: Get certified in cloud platforms like AWS (e.g., AWS Certified Solutions Architect).
4. Cybersecurity Experts
Cybersecurity professionals protect companies from data breaches, malware, and other online threats. As remote work grows, keeping digital information safe is more crucial than ever.
Key Skills: Ethical hacking, penetration testing, risk management, and cybersecurity tools.
Industries Hiring: Banking, IT, and government agencies.
Career Tip: Stay updated on new cybersecurity threats and trends. Certifications like CEH (Certified Ethical Hacker) or CISSP (Certified Information Systems Security Professional) can help you advance in your career.
5. Full-Stack Developers
Full-stack developers are skilled programmers who can work on both the front-end (what users see) and the back-end (server and database) of web applications.
Key Skills: JavaScript, React, Node.js, HTML/CSS, and APIs.
Industries Hiring: Tech startups, e-commerce, and digital media.
Career Tip: Create a strong GitHub profile with projects that highlight your full-stack skills. Learn popular frameworks like React Native to expand into mobile app development.
6. DevOps Engineers
DevOps engineers help make software faster and more reliable by connecting development and operations teams. They streamline the process for quicker deployments.
Key Skills: CI/CD pipelines, automation tools, scripting, and system administration.
Industries Hiring: SaaS companies, cloud service providers, and enterprise IT.
Career Tip: Learn key tools like Jenkins, Ansible, and Kubernetes, and develop scripting skills in languages like Bash or Python. Earning a DevOps certification is a plus and can enhance your expertise in the field.
7. Blockchain Developers
They build secure, transparent, and unchangeable systems. Blockchain is not just for cryptocurrencies; it’s also used in tracking supply chains, managing healthcare records, and even in voting systems.
Key Skills: Solidity, Ethereum, smart contracts, cryptography, and DApp development.
Industries Hiring: Fintech, logistics, and healthcare.
Career Tip: Create and share your own blockchain projects to show your skills. Joining blockchain communities can help you learn more and connect with others in the field.
8. Robotics Engineers
Robotics engineers design, build, and program robots to do tasks faster or safer than humans. Their work is especially important in industries like manufacturing and healthcare.
Key Skills: Programming (C++, Python), robotics process automation (RPA), and mechanical engineering.
Industries Hiring: Automotive, healthcare, and logistics.
Career Tip: Stay updated on new trends like self-driving cars and AI in robotics.
9. Internet of Things (IoT) Specialists
IoT specialists work on systems that connect devices to the internet, allowing them to communicate and be controlled easily. This is crucial for creating smart cities, homes, and industries.
Key Skills: Embedded systems, wireless communication protocols, data analytics, and IoT platforms.
Industries Hiring: Consumer electronics, automotive, and smart city projects.
Career Tip: Create IoT prototypes and learn to use platforms like AWS IoT or Microsoft Azure IoT. Stay updated on 5G technology and edge computing trends.
10. Product Managers
Product managers oversee the development of products, from idea to launch, making sure they are both technically possible and meet market demands. They connect technical teams with business stakeholders.
Key Skills: Agile methodologies, market research, UX design, and project management.
Industries Hiring: Software development, e-commerce, and SaaS companies.
Career Tip: Work on improving your communication and leadership skills. Getting certifications like PMP (Project Management Professional) or CSPO (Certified Scrum Product Owner) can help you advance.
Importance of Upskilling in the Tech Industry
Stay Up-to-Date: Technology changes fast, and learning new skills helps you keep up with the latest trends and tools.
Grow in Your Career: By learning new skills, you open doors to better job opportunities and promotions.
Earn a Higher Salary: The more skills you have, the more valuable you are to employers, which can lead to higher-paying jobs.
Feel More Confident: Learning new things makes you feel more prepared and ready to take on tougher tasks.
Adapt to Changes: Technology keeps evolving, and upskilling helps you stay flexible and ready for any new changes in the industry.
Top Companies Hiring for These Roles
Global Tech Giants: Google, Microsoft, Amazon, and IBM.
Startups: Fintech, health tech, and AI-based startups are often at the forefront of innovation.
Consulting Firms: Companies like Accenture, Deloitte, and PwC increasingly seek tech talent.
In conclusion, the tech world is constantly changing, and staying updated is key to having a successful career. In 2025, jobs in fields like AI, cybersecurity, data science, and software development will be in high demand. By learning the right skills and keeping up with new trends, you can prepare yourself for these exciting roles. Whether you're just starting or looking to improve your skills, the tech industry offers many opportunities for growth and success.
#Top 10 Tech Jobs in 2025#In- Demand Tech Jobs#High paying Tech Jobs#artificial intelligence#datascience#cybersecurity
2 notes
·
View notes
Text
Recent Activities of Transparent Tribe (APT36)

Transparent Tribe, also known as APT36, is a Pakistan-based threat group active since at least 2013. They have consistently targeted Indian government, defence, and aerospace sectors. Recent activities indicate a significant evolution in their tactics and tools.
May 2024: Targeting Indian Defence and Aerospace Sectors

In May 2024, Transparent Tribe intensified cyber-espionage operations against India's defence and aerospace sectors. They employed phishing emails containing malicious attachments or links to deploy various tools, including:
Crimson RAT: A remote access Trojan used for data theft and surveillance.
Poseidon: A Golang-based agent compatible with Linux and macOS systems.
Python-based downloaders: Compiled into ELF binaries to target Linux environments.
The group also exploited India's development of indigenous Linux-based operating systems, such as MayaOS, by distributing Executable and Linkable Format (ELF) binaries to compromise these systems. [Source]
Late 2023 to Early 2024: Evolution of ElizaRAT Malware
Between late 2023 and early 2024, Transparent Tribe advanced their malware capabilities by developing ElizaRAT, a Windows Remote Access Tool. ElizaRAT's evolution included:
Enhanced Evasion Techniques: Improved methods to avoid detection by security systems.
Cloud-Based Command and Control (C2): Utilisation of services like Google Drive, Telegram, and Slack for C2 communications.
Modular Payloads: Deployment of additional payloads such as ApoloStealer for targeted data exfiltration.
These developments indicate a strategic shift towards more sophisticated and flexible attack methodologies. [Source]
September 2023: Infrastructure Expansion and Linux Targeting
In September 2023, investigations revealed that Transparent Tribe expanded their infrastructure, employing Mythic C2 servers hosted on platforms like DigitalOcean. They also began targeting Linux environments by distributing malicious desktop entry files disguised as PDFs. This approach aimed to compromise systems running Linux-based operating systems, aligning with India's adoption of such systems in government sectors. [Source]
June 2023: Focus on Indian Education Sector
By June 2023, Transparent Tribe shifted focus to India's education sector, distributing education-themed malicious documents via phishing emails. These campaigns aimed to deploy Crimson RAT, enabling the group to exfiltrate sensitive information from educational institutions. [Source]
These recent activities demonstrate Transparent Tribe's persistent efforts to adapt and refine their tactics, expanding their target spectrum and enhancing their malware arsenal to effectively compromise systems across various sectors.
Author: Kelly Hector
Blog: Digitalworldvision
2 notes
·
View notes
Text
tag game
tagged by @strangethings-everywhere
last song: Energy by Fire Tiger because, goddamn, 80s music was something else and a contemporary band with an 80s style??? Killer. amazing. Give me more.
favorite color: A really dark purple or dark orange, I can't make up my mind!
currently watching: literally on the tv right now Scream Queens, but started watching Brilliant Minds when it began and have now gotten many in one of my servers hooked and watching them lose their minds is almost as much fun as watching the show!
currently reading: Vampire Boys: True Tales From Operators of the RAF's First Single-Engined Jet.
last movie: Last night rewatched MI4 because Ghost Protocol really got that Benji/Ethan falling in luuuuurrrvveee thang going on, but that I saw for the first time recently Tombstone. I don't know how I didn't watch it for so long but Val Kilmer was fucking robbed of a nomination and the resulting Oscar. but in general I don't truly need a movie to be "good" or "award worthy", I just need it to make my brain feel like pop rocks so the viewing habits are ...eclectic.
sweet/spicy/savory: sweet and salty together! But I have a super unhealthy love of salt but due to oddly enough unrelated issues, developed insanely high blood-pressure (gotta love a previously undiagnosed lethal heart condition while your doctor tells you you're fine!!) I've had to largely give up salt and I miss it sooo much
relationship status: single and not remotely caring to mingle.
current obsession: Tie between The Boys In The Boat and Top Gun because boys in boats and boys in planes. ~~i come from a family with a long military (largely naval) history and a bunch of pilots so its an illness I can't help it. Freud's ghost has been contacted. Apparently my call is very important to him...
last thing you googled: what the accident down the end of my road was about/caused by because shutting off the entire road system around it, managing to go fast enough on my road to flip not one, not two, but three cars onto the roof????? THE FUCK YOU DOING AT 7 AM!?!?!?
2 notes
·
View notes
Text
Thinking back on it, I did some really clever shit in my tenure as a field service engineer at the warehouse robotics company that could have only worked because there were so many unsecured doors in the software
The V3s we had deployed when I first joined communicated with us over WiFi. We used PuTTY to remote in and run a long command to start the bootloader and the main process
But you could also just run the main program without the long command. If you did that you could send opcodes to the motor controllers and get the wheels to turn or actuate functions. We were supposed to use this to run bench tests on test stands to ensure the robots were fit to put in production and home the motors
We did
But I also taught everyone how to send opcodes to manually drive the bots back to charging locations so we didn't have to push them. The only thing you had to be careful of was not putting a robot into production while it was running the program in the foreground, as we called it. The long command was needed to run the program in the background, because if you closed your PuTTY window the foreground program would stop
We did have someone crash a robot this way when they logged out while it was moving
But for being WiFi connected, this system was safer than you'd think, as opcodes sent from a user could only be interpreted in foreground and system opcodes only received in background. Halting and restarting the program while the system was online would throw an error on the main command and control server that would put the robot out of service
Shit got wild when we upgraded to V4
Gone is the WiFi in favor of a 2.4 GHz radio. The robot also automatically runs the bootloader on power up. Foreground and background modes are a thing of the past. Now any commands sent on that radio channel, from the user or the command and control server, are accepted, no matter what
Granted, doing that while the system is on will desync the robot physically from what the system thinks is going on but it's smart enough to put the robot into ESTOP when the robot moves when it shouldn't
I however had different uses
There were several errors I encountered where I eventually diagnosed that the reason a feature wasn't working was that the robot was given the opcode to move before it had been given the opcode to get ready to move
So the robot would try to move, fail to move, go into ESTOP, and then I'd step in. I'd clear the ESTOP on the robot locally with the ESTOP-clear opcode and send the opcode for getting ready to move. The robot, having gotten a command when it shouldn't, re-enters ESTOP and now everything is synced back up. I can now clear the ESTOP normally, and when it clears, the command server retries the last failed command. Now that the robot is ready to move, it drives away like nothing happened and operations resume
V5 shut down all my clever tricks. Maintenance mode is a physical switch that has to be pressed to send manual commands, and pressing it power cycles the bot. Probably for the best tho
2 notes
·
View notes
Text
Ways to Protect Your VPS Against Online Threats
Leaks of customer information are devastating to businesses: they can damage your firm's reputation and result in severe legal penalties. A solid understanding of virtual private server security is essential for protecting yourself against online threats.
Beyond following the most effective security practices, you also need to perform routine checks on your virtual private server (VPS). This article offers practical suggestions for protecting a VPS from cyberattacks. Keep reading to learn more!
Tips for Virtual Private Server Cybersecurity in 2024
If you are in charge of a web server, it is essential to stay up to date on the latest security measures and the risks posed by the internet. The following are best practices for ensuring the security of virtual private servers (VPS) in 2024.
Deactivate the Root Login feature
The root user of a virtual private server (VPS) holds the highest level of privileges and can modify any element of the server. That makes this account a prime target for attackers trying to seize control of the system.
Disabling direct root login hardens this entry point and helps protect your server from attacks. For day-to-day administration, create a separate account that can elevate to root privileges (for example via sudo) when needed.
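To illustrate, here is a minimal Python sketch that audits an OpenSSH configuration for the PermitRootLogin directive. The file path assumes a typical Debian/Ubuntu layout and is only an example; this is a read-only check, not a hardening tool.

from pathlib import Path

# Minimal sketch: check whether direct root login is disabled in sshd_config.
# The path assumes a Debian/Ubuntu layout; adjust it for your distribution.
def root_login_disabled(config_path: str = "/etc/ssh/sshd_config") -> bool:
    """Return True only if PermitRootLogin is explicitly set to 'no'."""
    for line in Path(config_path).read_text().splitlines():
        parts = line.strip().split()
        if len(parts) >= 2 and parts[0] == "PermitRootLogin":
            return parts[1].lower() == "no"
    # If the directive is missing, OpenSSH falls back to its compiled-in
    # default, which still allows key-based root login on many builds.
    return False

if __name__ == "__main__":
    print("Root login disabled:", root_login_disabled())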
Make your passwords more secure
The most easily guessed passwords are poorly crafted ones that contain common words or identifiable personal details. Create strong passwords by mixing character types: digits, special characters, and both uppercase and lowercase letters.
Consider using a reputable password manager to generate and store secure passwords. Uniqueness matters: change your passwords periodically, ideally every three months, and never reuse the same password across accounts. As a final precaution, never share your root login credentials.
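If you prefer to script this rather than use a password manager's generator, a minimal sketch using Python's standard secrets module could look like the following; the 20-character length is an arbitrary choice.

import secrets
import string

# Minimal sketch: generate a random password that mixes uppercase and
# lowercase letters, digits, and special characters.
def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pwd = "".join(secrets.choice(alphabet) for _ in range(length))
        # Keep only candidates that contain every character class.
        if (any(c.islower() for c in pwd) and any(c.isupper() for c in pwd)
                and any(c.isdigit() for c in pwd)
                and any(c in string.punctuation for c in pwd)):
            return pwd

print(generate_password())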
Modify the SSH Port That Is Default
Leaving SSH on its default port 22 invites automated attacks: port scanning and brute-force attempts are two common ways attackers try to gain unauthorized access to a remote system. When hardening a VPS against these threats, change the default SSH listening port to something unexpected to lock out casual probes.
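After changing the port and restarting sshd, you may want to sanity-check the result from another machine. The sketch below is purely illustrative: the host address and port 2222 are placeholders for your own values.

import socket

# Minimal sketch: check which SSH ports answer on the server.
# 203.0.113.10 and port 2222 are placeholders for your own VPS and port.
def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

host = "203.0.113.10"
print("New SSH port (2222) reachable:", port_open(host, 2222))
print("Default port 22 still open:", port_open(host, 22))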
Restriction of User Access
When you have a significant number of users on your virtual private server (VPS) hosting, it is important to carefully plan out how rights and control will be distributed. Your server's sensitive data and assets are at risk of being compromised if you grant root access to each and every user. By understanding and applying the various forms of authorization, you can make certain that each user has access to only the permissions that they require.
In the event that an account is compromised, this strategy keeps the damage to a minimum. It also reduces the attack surface and limits the potential impact of insider attacks, protecting sensitive information and systems. A further benefit is that it simplifies auditing, making it easier to monitor user activity and spot inconsistencies.
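One simple way to keep an eye on this is to list which accounts belong to the administrative group. The sketch below assumes a Debian/Ubuntu-style "sudo" group; on RHEL-style systems the group is usually "wheel".

# Minimal sketch: list accounts that belong to the admin group by reading
# /etc/group. The group name "sudo" is a Debian/Ubuntu assumption; RHEL-style
# systems usually use "wheel".
def privileged_users(group_file: str = "/etc/group", group: str = "sudo") -> list[str]:
    with open(group_file) as fh:
        for line in fh:
            name, _pw, _gid, members = line.rstrip("\n").split(":", 3)
            if name == group:
                return [m for m in members.split(",") if m]
    return []

print("Users with admin access:", privileged_users())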
Put the Principles of Robust Authentication into Practice
A strong, one-of-a-kind password should be generated for each account on your virtual private server (VPS), and you should utilize multi-factor authentication (MFA) to add an extra layer of security. For remote access, you should make use of secure protocols such as SSH keys in order to prevent unauthorized login attempts and protect your virtual private server (VPS) against assaults that are based on credentials.
SSH keys are far harder to crack than passwords because they are much longer. A key pair is made up of a public key and a private key: the private key is stored on your own device, while the public key is kept on the server. When someone attempts to log in, the server generates a random string and encrypts it with the public key; access is granted only if the client can decrypt that string with the matching private key.
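Generating a key pair is a one-line job with ssh-keygen; the sketch below simply wraps that call in Python. The file name and comment are illustrative, and ssh-keygen prompts for a passphrase interactively.

import subprocess
from pathlib import Path

# Minimal sketch: create an Ed25519 key pair by calling ssh-keygen.
# The file name and comment are illustrative; ssh-keygen asks for a
# passphrase interactively so it never lands in your shell history.
key_path = Path.home() / ".ssh" / "id_ed25519_vps"

subprocess.run(
    ["ssh-keygen", "-t", "ed25519", "-f", str(key_path), "-C", "vps-admin"],
    check=True,
)

print("Public key to add to the server's authorized_keys:")
print(key_path.with_suffix(".pub").read_text())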
Set up a VPN on your VPS
On a public connection, your information is at risk of being intercepted and stolen by unauthorized third parties. To avoid this, set up a virtual private network (VPN): it conceals your computer's true location and routes traffic through an encrypted connection.
It does this by assigning you a new IP address, which makes your real address much harder to trace and lets you browse more anonymously. A VPN protects your data in transit and keeps eavesdroppers away from your communications; alongside a firewall, it adds another layer of protection for your VPS.
Be sure to utilize firewalls
Firewalls are your first line of defense against threats from the internet. Tools such as APF and CSF act as gatekeepers, monitoring both incoming and outgoing traffic, blocking unwanted connections at the perimeter, and ensuring that only authorized traffic passes through. They provide a barrier that can be tailored to your system's specific requirements and updated as those requirements change.
Firewalls also offer features that help you identify and manage common cyber risks quickly, such as detailed logging and notifications about potential security incidents. Their configurability and timely protection make them an essential part of any cybersecurity setup.
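As a concrete illustration of a default-deny policy, here is a minimal sketch using ufw, a simpler firewall front end than the APF/CSF tools mentioned above. The allowed ports (2222 for SSH, 443 for HTTPS) are assumptions, and the commands must run as root.

import subprocess

# Minimal sketch: default-deny inbound policy with ufw. Run as root.
# Port 2222 (SSH) and 443 (HTTPS) are assumptions; adjust for your services.
rules = [
    ["ufw", "default", "deny", "incoming"],
    ["ufw", "default", "allow", "outgoing"],
    ["ufw", "allow", "2222/tcp"],   # your SSH port
    ["ufw", "allow", "443/tcp"],    # HTTPS, if the VPS serves web traffic
    ["ufw", "--force", "enable"],
]

for rule in rules:
    subprocess.run(rule, check=True)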
Make sure you do regular backups
Back up your data regularly to protect it against loss in the event of a breach. Store backups somewhere other than the VPS itself and automate the backup process. That way, even if the VPS is compromised, your essential data remains secure and accessible.
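A very simple off-server backup could be scripted along these lines. The directories are placeholders for your own layout, and in practice you would schedule the script with cron or a systemd timer.

import shutil
import tarfile
from datetime import datetime
from pathlib import Path

# Minimal sketch: archive a data directory and copy it to an off-server mount.
# All three paths are placeholders for your own layout.
data_dir = Path("/var/www")            # what to back up
staging = Path("/tmp")                 # local staging area
offsite = Path("/mnt/backup-storage")  # e.g. a mounted remote volume

archive = staging / f"vps-backup-{datetime.now():%Y%m%d-%H%M%S}.tar.gz"
with tarfile.open(archive, "w:gz") as tar:
    tar.add(data_dir, arcname=data_dir.name)

shutil.copy2(archive, offsite / archive.name)
print("Backup written to", offsite / archive.name)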
Set up an antivirus program
Installing antivirus software on your VPS helps protect your data and forestall compromises. A server-side antivirus tool continuously inspects files and activity, spotting dangers in real time in much the same way desktop antivirus products have protected countless PCs around the world.
Employ a Malware Scanner Software
An antivirus program protects your VPS from known dangers such as trojans and worms, but it may not identify more recent exploits such as zero-day malware.
Combine antivirus software with a dedicated malware scanner to improve your VPS security. Scanners typically update their detection rules more quickly, which lets them catch newer threats on your system.
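For example, a scheduled ClamAV scan could be wrapped in a short script like the one below. It assumes the clamscan binary is installed and relies on ClamAV's documented exit codes (0 for clean, 1 for infected).

import subprocess

# Minimal sketch: recursive ClamAV scan of /home, assuming clamscan is installed.
# Exit codes: 0 = no malware found, 1 = infected files found, 2 = scanner error.
result = subprocess.run(
    ["clamscan", "-r", "-i", "/home"],   # -r: recursive, -i: list only infected files
    capture_output=True, text=True,
)

if result.returncode == 0:
    print("No malware found.")
elif result.returncode == 1:
    print("Infected files detected:\n", result.stdout)
else:
    print("Scanner error:\n", result.stderr)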
Review User Rights
If there are a large number of users on your virtual private server (VPS) hosting, you should give serious consideration to the distribution of control and rights. Your server's resources and sensitive data will be put at risk of security breaches if you grant root rights to every user on the server.
You should restrict the number of users who can access your server in order to avoid this problem. Managing users and assigning them varying permissions for particular files and system resources is one way to accomplish this goal.
Install systems that can detect and prevent intrusions
Intrusion detection and prevention systems (IDPS) monitor and analyze network traffic for indicators of malicious behavior or attempts to gain unauthorized access. Their ability to detect and block threats in real time improves the overall security posture of your VPS.
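A full IDPS such as Snort or Suricata is beyond a short example, but the underlying idea of watching logs for suspicious patterns can be sketched in a few lines. The log path and threshold below are assumptions, and a tool like fail2ban does this job properly in production.

import re
from collections import Counter

# Minimal sketch: flag IPs with repeated failed SSH logins in the auth log.
# The log path is a Debian/Ubuntu assumption, and the threshold is arbitrary;
# production systems should rely on a real IDPS or fail2ban instead.
LOG = "/var/log/auth.log"
PATTERN = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5

failures = Counter()
with open(LOG) as fh:
    for line in fh:
        match = PATTERN.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.most_common():
    if count >= THRESHOLD:
        print(f"{ip}: {count} failed SSH logins -- candidate for blocking")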
Conclusion
Protecting your virtual private server (VPS) is essential because it stores sensitive information. Although Linux is well known for its robust security, a VPS still has weaknesses: malware, sniffing and brute-force attacks, SQL injection, cross-site scripting (XSS), missing function-level access control, and broken authentication are among the most common attacks and problems on a Linux system. VPS owners need to know how to monitor the server and its operating system so they can put effective security measures in place and avoid these issues.

Dollar2host Dollar2host.com We provide expert Webhosting services for your desired needs
2 notes
·
View notes