# Network Configuration And Management
ISACA Introduces New Google Cloud Platform Audit Program
ISACA, a global authority in digital trust, has introduced the Google Cloud Platform (GCP) Audit Program to aid auditors in assessing risk within cloud environments. The program includes a comprehensive spreadsheet guide for testing GCP services, covering crucial aspects like governance, network management, and data security. Misconfigurations and misunderstandings about shared cloud responsibilities are identified as significant risk factors, underscoring the need for auditors to understand GCP concepts such as identity and access management, the Organization/Folder/Project hierarchy, and which logging options are enabled or disabled. The program equips auditors with the tools to effectively evaluate the adequacy and effectiveness of GCP services on what has grown to become the third-largest cloud platform. ISACA members can access the audit program at no cost, while non-members can acquire it for a nominal fee.
Read More - https://www.techdogs.com/tech-news/business-wire/isaca-introduces-new-google-cloud-platform-audit-program
DEVOPS & CLOUD ENGINEERING - SPARK TECHNOLOGIES
Our services-led approach creates efficiencies and optimizes your IT environment, leading to better business outcomes. We create and execute strategies to unlock opportunities, providing effective design, deployment, and resourcing services. You can focus on your core business by removing the day-to-day management and operations of your hybrid IT estate through our multivendor Support Services and platform-delivered Managed Services.

Managed Network Switch Configuration Kuwait - Tech And Beyond
In the bustling landscape of Kuwait's technological advancement, seamless connectivity is the lifeblood of businesses and institutions. Tech And Beyond stands out as a beacon of expertise in managed network switch configuration in Kuwait, offering tailored solutions that ensure robust, secure, and efficient networks. In this article, we'll explore the key attributes that make Tech And Beyond a trusted partner for organizations seeking to optimize their network infrastructure in Kuwait.
I. The Critical Role of Network Switch Configuration:
Connectivity Backbone: Network switches form the backbone of connectivity, facilitating the flow of data within organizations and ensuring a smooth operational workflow.
Security and Efficiency: Properly configured switches are pivotal in enhancing network security, preventing unauthorized access, and optimizing data transmission for efficiency.
II. Tech And Beyond: Navigating Kuwait's Tech Landscape:
Profound Expertise: Tech And Beyond brings a wealth of profound expertise to Kuwait's tech scene, having successfully configured networks for diverse organizations.
Customized Solutions: The company adopts a client-centric approach, offering customized network switch configurations tailored to the specific needs of each client.
III. Managed Network Switch Configuration Solutions:
Strategic Planning: Tech And Beyond starts with strategic planning, analyzing the unique requirements of the organization to devise an optimal network configuration plan.
Implementation Excellence: The company excels in the implementation phase, configuring switches with precision to ensure seamless connectivity and optimal performance.
IV. Security-Centric Approaches:
Access Control Policies: Tech And Beyond implements robust access control policies, safeguarding networks against unauthorized access and potential security threats.
Encryption Standards: The company adheres to industry-leading encryption standards, ensuring that data transmission across the network remains secure and protected.
V. Performance Optimization:
Bandwidth Management: Tech And Beyond optimizes network performance by effectively managing bandwidth, preventing congestion and ensuring a smooth flow of data.
Quality of Service (QoS): The company implements QoS measures, prioritizing critical data traffic to enhance the overall user experience.
VI. Remote Network Monitoring:
24/7 Monitoring: Tech And Beyond provides continuous monitoring of configured networks, offering real-time insights and proactively addressing potential issues.
Remote Troubleshooting: The company is equipped to perform remote troubleshooting, minimizing downtime and ensuring the uninterrupted operation of networks.
VII. Compliance and Standards:
Regulatory Compliance: Tech And Beyond ensures that network switch configurations adhere to regulatory standards, keeping organizations in Kuwait compliant with industry regulations.
Best Practices: The company follows industry best practices, staying abreast of the latest standards to deliver configurations that meet the highest quality benchmarks.
VIII. Scalability and Future-Readiness:
Scalable Configurations: Tech And Beyond's configurations are designed for scalability, accommodating the growth and evolving needs of organizations over time.
Integration of Emerging Technologies: The company integrates emerging technologies into network configurations, ensuring that networks remain at the forefront of technological advancements.
IX. Client Training and Support:
User Training Programs: Tech And Beyond offers user training programs, empowering organizations to understand and maximize the benefits of their configured networks.
Responsive Support: The company provides responsive support, addressing client queries and concerns promptly to foster long-term partnerships.
X. Conclusion:
Tech And Beyond isn't just a provider of managed network switch configuration in Kuwait; it's a catalyst for seamless connectivity and technological advancement. With a commitment to excellence, security, and future-readiness, the company stands as a reliable partner for organizations seeking to optimize their network infrastructure. For businesses and institutions navigating Kuwait's dynamic tech landscape, Tech And Beyond offers not just configurations but a transformative journey into a connected, secure, and efficient future.
Security Onion Install: Awesome Open Source Security for Home Lab
Security Onion is at the top of the list if you want an excellent security solution to try in your home lab or even for enterprise security monitoring. It provides many great security tools for threat hunting and overall security, and is fairly easy to get up and running quickly. Table of contents: What is Security Onion? · Intrusion Detection and Threat Hunting · Monitoring and Log Management · Community…
How I ditched streaming services and learned to love Linux: A step-by-step guide to building your very own personal media streaming server (V2.0: REVISED AND EXPANDED EDITION)
This is a revised, corrected and expanded version of my tutorial on setting up a personal media server that previously appeared on my old blog (donjuan-auxenfers). I expect that post is still making the rounds (hopefully with my addendum on modifying group share permissions in Ubuntu to circumvent 0x8007003B "Unexpected Network Error" messages in Windows 10/11 when transferring files), but I have no way of checking. Anyway, this new revised version of the tutorial corrects one or two small errors I discovered when rereading what I wrote, adds links to all products mentioned, and is just more polished generally. I also expanded it a bit, pointing more adventurous users toward programs such as Sonarr/Radarr/Lidarr and Overseerr, which can be used for automating user requests and media collection.
So then, what is this tutorial? This is a tutorial on how to build and set up your own personal media server, using Ubuntu as an operating system and Plex (or Jellyfin) to not only manage your media but also stream that media to your devices both at home and abroad, anywhere in the world where you have an internet connection. Its intent is to show you how building a personal media server and stuffing it full of films, TV, and music that you acquired through ~~indiscriminate and voracious media piracy~~ various legal methods will free you to completely ditch paid streaming services. No more will you have to pay for Disney+, Netflix, HBO Max, Hulu, Amazon Prime, Peacock, CBS All Access, Paramount+, Crave or any other streaming service that is not named Criterion Channel. Instead, whenever you want to watch your favourite films and television shows, you'll have your own personal service that only features things that you want to see, with files that you have control over. And for the music fans out there, both Jellyfin and Plex support music streaming, meaning you can even ditch music streaming services. Goodbye Spotify, YouTube Music, Tidal and Apple Music; welcome back unreasonably large MP3 (or FLAC) collections.
On the hardware front, I’m going to offer a few options catered towards different budgets and media library sizes. The cost of getting a media server up and running using this guide will cost you anywhere from $450 CAD/$325 USD at the low end to $1500 CAD/$1100 USD at the high end (it could go higher). My server was priced closer to the higher figure, but I went and got a lot more storage than most people need. If that seems like a little much, consider for a moment, do you have a roommate, a close friend, or a family member who would be willing to chip in a few bucks towards your little project provided they get access? Well that's how I funded my server. It might also be worth thinking about the cost over time, i.e. how much you spend yearly on subscriptions vs. a one time cost of setting up a server. Additionally there's just the joy of being able to scream "fuck you" at all those show cancelling, library deleting, hedge fund vampire CEOs who run the studios through denying them your money. Drive a stake through David Zaslav's heart.
On the software side I will walk you step-by-step through installing Ubuntu as your server's operating system, configuring your storage as a RAIDz array with ZFS, sharing your zpool to Windows with Samba, running a remote connection between your server and your Windows PC, and then a little about getting started with Plex/Jellyfin. Every terminal command you will need to input will be provided, and I even share a custom bash script that will make used vs. available drive space on your server display correctly in Windows.
If you have a different preferred flavour of Linux (Arch, Manjaro, Red Hat, Fedora, Mint, openSUSE, CentOS, Slackware, et al.) and are aching to tell me off for being basic and using Ubuntu, this tutorial is not for you. The sort of person with a preferred Linux distro is the sort of person who can do this sort of thing in their sleep. Also, I don't care. This tutorial is intended for the average home computer user. This is also why we're not using a more exotic home server solution like running everything through Docker containers and managing it through a dashboard like Homarr or Heimdall. While such solutions are fantastic and can be very easy to maintain once it's all set up, wrapping your brain around Docker is a whole thing in and of itself. If you do follow this tutorial and had fun putting everything together, then I would encourage you to return in a year's time, do your research, and set everything up with Docker containers.
Lastly, this is a tutorial aimed at Windows users. Although I was a daily user of OS X for many years (roughly 2008-2023) and I've dabbled quite a bit with various Linux distributions (mostly Ubuntu and Manjaro), my primary OS these days is Windows 11. Many things in this tutorial will still be applicable to Mac users, but others (e.g. setting up shares) you will have to look up for yourself. I doubt it would be difficult to do so.
Nothing in this tutorial will require feats of computing expertise. All you will need is a basic computer literacy (i.e. an understanding of what a filesystem and directory are, and a degree of comfort in the settings menu) and a willingness to learn a thing or two. While this guide may look overwhelming at first glance, it is only because I want to be as thorough as possible. I want you to understand exactly what it is you're doing, I don't want you to just blindly follow steps. If you half-way know what you’re doing, you will be much better prepared if you ever need to troubleshoot.
Honestly, once you have all the hardware ready it shouldn't take more than an afternoon or two to get everything up and running.
(This tutorial is just shy of seven thousand words long so the rest is under the cut.)
Step One: Choosing Your Hardware
Linux is a lightweight operating system; depending on the distribution there's close to no bloat. There are recent distributions available at this very moment that will run perfectly fine on a fourteen-year-old i3 with 4GB of RAM. Moreover, running Plex or Jellyfin isn't resource intensive in 90% of use cases. All this is to say, we don't require an expensive or powerful computer. This means that there are several options available: 1) use an old computer you already have sitting around but aren't using; 2) buy a used workstation from eBay; or, what I believe to be the best option, 3) order an N100 Mini-PC from AliExpress or Amazon.
Note: If you already have an old PC sitting around that you’ve decided to use, fantastic, move on to the next step.
When weighing your options, keep a few things in mind: the number of people you expect to be streaming simultaneously at any one time, the resolution and bitrate of your media library (4k video takes a lot more processing power than 1080p) and most importantly, how many of those clients are going to be transcoding at any one time. Transcoding is what happens when the playback device does not natively support direct playback of the source file. This can happen for a number of reasons, such as the playback device's native resolution being lower than the file's internal resolution, or because the source file was encoded in a video codec unsupported by the playback device.
Ideally we want any transcoding to be performed by hardware. This means we should be looking for a computer with an Intel processor with Quick Sync. Quick Sync is a dedicated core on the CPU die designed specifically for video encoding and decoding. This specialized hardware makes for highly efficient transcoding both in terms of processing overhead and power draw. Without these Quick Sync cores, transcoding must be brute-forced through software. This takes up much more of a CPU's processing power and requires much more energy. But not all Quick Sync cores are created equal, and you need to keep this in mind if you've decided either to use an old computer or to shop for a used workstation on eBay.
Any Intel processor from second generation Core (Sandy Bridge, circa 2011) onward has Quick Sync cores. It's not until 6th gen (Skylake), however, that the cores support the H.265 HEVC codec. Intel's 10th gen (Comet Lake) processors introduced support for 10-bit HEVC and HDR tone mapping. And the recent 12th gen (Alder Lake) processors brought with them hardware AV1 decoding. As an example, while an 8th gen (Coffee Lake) i5-8500 will be able to hardware transcode an H.265 encoded file, it will fall back to software transcoding if given a 10-bit H.265 file. If you've decided to use that old PC or to look on eBay for an old Dell Optiplex, keep this in mind.
Note 1: The price of old workstations varies wildly and fluctuates frequently. If you get lucky and go shopping shortly after a workplace has liquidated a large number of their workstations you can find deals for as low as $100 on a barebones system, but generally an i5-8500 workstation with 16gb RAM will cost you somewhere in the area of $260 CAD/$200 USD.
Note 2: The AMD equivalent to Quick Sync is called Video Core Next, and while it's fine, it's not as efficient and not as mature a technology. It was only introduced with the first generation Ryzen CPUs and it only got decent with their newest CPUs; since we want something cheap, stick with Intel.
Alternatively you could forgo having to keep track of which generation of CPU is equipped with Quick Sync cores supporting which codecs, and just buy an N100 mini-PC. For around the same price or less than a used workstation you can pick up a mini-PC with an Intel N100 processor. The N100 is a four-core processor based on the 12th gen Alder Lake architecture and comes equipped with the latest revision of the Quick Sync cores. These little processors offer astounding hardware transcoding capabilities for their size and power draw. Otherwise they perform equivalently to an i5-6500, which isn't a terrible CPU. A friend of mine uses an N100 machine as a dedicated retro emulation gaming system and it does everything up to 6th generation consoles just fine. The N100 is also a remarkably efficient chip; it sips power. In fact, the difference between running one of these and an old workstation could work out to hundreds of dollars a year in energy bills depending on where you live.
You can find these Mini-PCs all over Amazon, or for a little cheaper on AliExpress. They range in price from $170 CAD/$125 USD for a no-name N100 with 8GB RAM to $280 CAD/$200 USD for a Beelink S12 Pro with 16GB RAM. The brand doesn't really matter; they're all coming from the same three factories in Shenzhen, so go for whichever one fits your budget or has features you want. 8GB RAM should be enough, as Linux is lightweight and Plex only calls for 2GB RAM. 16GB RAM might result in a slightly snappier experience, especially with ZFS. A 256GB SSD is more than enough for what we need as a boot drive, but going for a bigger drive might allow you to get away with things like creating preview thumbnails for Plex; it's up to you and your budget.
The Mini-PC I wound up buying was a Firebat AK2 Plus with 8GB RAM and a 256GB SSD.
Note: Be forewarned that if you decide to order a Mini-PC from AliExpress, note the type of power adapter it ships with. The mini-PC I bought came with an EU power adapter and I had to supply my own North American power supply. Thankfully this is a minor issue as barrel plug 30W/12V/2.5A power adapters are easy to find and can be had for $10.
Step Two: Choosing Your Storage
Storage is the most important part of our build. It is also the most expensive. Thankfully, it's also the most easily upgradeable down the line.
For people with a smaller media collection (4TB to 8TB), a more limited budget, or who will only ever have two simultaneous streams running, I would say that the most economical course of action would be to buy a USB 3.0 8TB external HDD. Something like this one from Western Digital or this one from Seagate. One of these external drives will cost you in the area of $200 CAD/$140 USD. Down the line you could add a second external drive or replace it with a multi-drive RAIDz set up such as detailed below.
If a single external drive is the path for you, move on to step three.
For people with larger media libraries (12TB+), who prefer media in 4K, or who care about data redundancy, the answer is a RAID array featuring multiple HDDs in an enclosure.
Note: If you are using an old PC or used workstation as your server and have room for at least three 3.5" drives, and as many open SATA ports on your motherboard, you won't need an enclosure; just install the drives in the case. If your old computer is a laptop or doesn't have room for more internal drives, then I would suggest an enclosure.
The minimum number of drives needed to run a RAIDz array is three, and seeing as RAIDz is what we will be using, you should be looking for an enclosure with three to five bays. I think that four disks makes for a good compromise for a home server. Regardless of whether you go for a three, four, or five bay enclosure, do be aware that in a RAIDz1 array the space equivalent of one drive is dedicated to parity, leaving (n − 1)/n of the raw capacity usable; i.e. in a four bay enclosure equipped with four 12TB drives configured as a RAIDz1 array, we would be left with a total of 36TB of usable space (48TB raw). The reason why we might sacrifice storage space in such a manner will be explained in the next section.
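If you want to sanity-check that arithmetic for your own shopping list, it takes two lines of shell. This is just a throwaway sketch; the drive count and size are the four-by-12TB example from above, so swap in your own numbers:

```shell
# Throwaway sketch: raw vs usable capacity for a RAIDz1 vdev of equal drives.
# RAIDz1 reserves the space equivalent of one drive for parity.
drives=4      # number of drives in the vdev (example value)
size_tb=12    # capacity of each drive in TB (example value)

raw=$(( drives * size_tb ))
usable=$(( (drives - 1) * size_tb ))

echo "raw: ${raw}TB, usable: ${usable}TB"
```

For RAIDz2 the parity cost is two drives, so the usable figure would be `(drives - 2) * size_tb` instead.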
A four bay enclosure will cost somewhere in the area of $200 CAD/$140 USD. You don't need anything fancy; we don't need anything with hardware RAID controls (RAIDz is done entirely in software) or even USB-C. An enclosure with USB 3.0 will perform perfectly fine. Don't worry too much about USB speed bottlenecks. A mechanical HDD will be limited by the speed of its mechanism long before it will be limited by the speed of a USB connection. I've seen decent looking enclosures from TerraMaster, Yottamaster, Mediasonic and Sabrent.
When it comes to selecting the drives, as of this writing, the best value (dollar per gigabyte) are those in the range of 12TB to 20TB. I settled on 12TB drives myself. If 12TB to 20TB drives are out of your budget, go with what you can afford, or look into refurbished drives. I'm not sold on the idea of refurbished drives but many people swear by them.
When shopping for hard drives, search for drives designed specifically for NAS use. Drives designed for NAS use typically have better vibration dampening and are designed to be active 24/7. They will also often make use of CMR (conventional magnetic recording) as opposed to SMR (shingled magnetic recording), which nets them a sizable read/write performance bump over typical desktop drives. Seagate IronWolf and Toshiba NAS are both well regarded lines when it comes to NAS drives. I would avoid Western Digital Red drives at this time. WD Reds were a go-to recommendation up until earlier this year, when it was revealed that their firmware will quite often throw up false SMART warnings at the three year mark telling you to replace a drive when there is nothing at all wrong with it. Such a drive will likely even be good for another six, seven, or more years.
Step Three: Installing Linux
For this step you will need a USB thumbdrive of at least 6GB in capacity, an .ISO of Ubuntu, and a way to make that thumbdrive bootable media.
First download a copy of Ubuntu desktop. (For best performance we could download the Server release, but for new Linux users I would recommend against it: the Server release is strictly command-line only, and having a GUI is very helpful for most people. Not many people are wholly comfortable doing everything through the command line; I'm certainly not one of them, and I grew up with DOS 6.0.) 22.04.3 Jammy Jellyfish is the current Long Term Support release; this is the one to get.
Download the .ISO and then download and install balenaEtcher on your Windows PC. BalenaEtcher is an easy to use program for creating bootable media, you simply insert your thumbdrive, select the .ISO you just downloaded, and it will create a bootable installation media for you.
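Before flashing, it's also worth verifying that the .ISO didn't get corrupted in transit. Ubuntu publishes a SHA256SUMS file alongside each release; the workflow looks like this self-contained demo (the file below is a stand-in, not a real ISO, so the whole thing can be run harmlessly anywhere):

```shell
# Self-contained demo of the checksum workflow. With a real download you'd
# fetch SHA256SUMS from the Ubuntu release page instead of generating it here.
echo "pretend ISO contents" > ubuntu-demo.iso
sha256sum ubuntu-demo.iso > SHA256SUMS
sha256sum -c SHA256SUMS   # prints "ubuntu-demo.iso: OK" on a match
```

A mismatch makes `sha256sum -c` print FAILED and exit non-zero, which is your cue to re-download before wasting a flash cycle.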
Once you've made a bootable media and you've got your Mini-PC (or your old PC/used workstation) in front of you, hook it directly into your router with an ethernet cable, and then plug in the HDD enclosure, a monitor, a mouse and a keyboard. Now turn that sucker on and hit whatever key gets you into the BIOS (typically ESC, DEL or F2). If you're using a Mini-PC, check to make sure that the P1 and P2 power limits are set correctly; my N100's P1 limit was set at 10W, a full 20W under the chip's power limit. Also make sure that the RAM is running at the advertised speed. My Mini-PC's RAM was set at 2333MHz out of the box when it should have been 3200MHz. Once you've done that, key over to the boot order and place the USB drive first in the boot order. Then save the BIOS settings and restart.
After you restart you'll be greeted by Ubuntu's installation screen. Installing Ubuntu is really straightforward: select the "minimal" installation option, as we won't need anything on this computer except for a browser (Ubuntu comes preinstalled with Firefox) and Plex Media Server/Jellyfin Media Server. Also remember to delete and reformat that Windows partition! We don't need it.
Step Four: Installing ZFS and Setting Up the RAIDz Array
Note: If you opted for just a single external HDD skip this step and move onto setting up a Samba share.
Once Ubuntu is installed it's time to configure our storage by installing ZFS and building our RAIDz array. ZFS is a "next-gen" file system that is both massively flexible and massively complex. It's capable of snapshot backups and self-healing error correction, and ZFS pools can be configured with drives operating in a supplemental manner alongside the storage vdev (e.g. fast cache, dedicated secondary intent log, hot swap spares, etc.). It's also a file system very amenable to fine tuning: block and sector size are adjustable to the use case, and you're afforded the option of different methods of inline compression. If you'd like a very detailed overview and explanation of its various features and tips on tuning a ZFS array, check out these articles from Ars Technica. For now we're going to ignore all these features and keep it simple: we're going to pull our drives together into a single vdev running in RAIDz, which will be the entirety of our zpool, no fancy cache drive or SLOG.
Open up the terminal and type the following commands:
sudo apt update
then
sudo apt install zfsutils-linux
This will install the ZFS utility. Verify that it's installed with the following command:
zfs --version
Now, it's time to check that the HDDs we have in the enclosure are healthy, running, and recognized. We also want to find out their device IDs and take note of them:
sudo fdisk -l
Note: You might be wondering why some of these commands require "sudo" in front of them while others don't. "Sudo" is short for "superuser do". When and where "sudo" is used has to do with the way permissions are set up in Linux. Only the "root" user has the access level to perform certain tasks in Linux. As a matter of security and safety, regular user accounts are kept separate from the "root" user. It's not advised (or even possible) to boot into Linux as "root" with most modern distributions. Instead, by using "sudo" our regular user account is temporarily given the power to do otherwise forbidden things. Don't worry about it too much at this stage, but if you want to know more check out this introduction.
If everything is working you should get a list of the various drives detected along with their device IDs which will look like this: /dev/sdc. You can also check the device IDs of the drives by opening the disk utility app. Jot these IDs down as we'll need them for our next step, creating our RAIDz array.
RAIDz is similar to RAID-5: instead of striping your data over multiple disks, exchanging redundancy for speed and available space (RAID-0), or mirroring your data by writing two copies of every piece (RAID-1), it writes parity blocks across the disks in addition to striping, providing a balance of speed, redundancy and available space. If a single drive fails, the parity blocks on the working drives can be used to reconstruct the entire array as soon as a replacement drive is added.
Additionally, RAIDz improves over some of the common RAID-5 flaws. It's more resilient and capable of self healing, as it is capable of automatically checking for errors against a checksum. It's more forgiving in this way, and it's likely that you'll be able to detect when a drive is dying well before it fails. A RAIDz array can survive the loss of any one drive.
Note: While RAIDz is indeed resilient, if a second drive fails during the rebuild, you're fucked. Always keep backups of things you can't afford to lose. This tutorial, however, is not about proper data safety.
To create the pool, use the following command:
sudo zpool create "zpoolnamehere" raidz "device IDs of drives we're putting in the pool"
For example, let's creatively name our zpool "mypool". This pool will consist of four drives with the device IDs sdb, sdc, sdd, and sde. The resulting command will look like this:
sudo zpool create mypool raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde
If, as an example, you bought five HDDs and decided you wanted more redundancy, dedicating two drives to parity, we would modify the command to "raidz2", which would look something like the following:
sudo zpool create mypool raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
An array configured like this is known as RAIDz2 and is able to survive two disk failures.
Once the zpool has been created, we can check its status with the command:
zpool status
Or more concisely with:
zpool list
The nice thing about ZFS as a file system is that a pool is ready to go immediately after creation. If we were to set up a traditional RAID-5 array using mdadm, we'd have to sit through a potentially hours-long process of reformatting and partitioning the drives. Instead we're ready to go right out of the gate.
The zpool should be automatically mounted to the filesystem after creation, check on that with the following:
df -hT | grep zfs
Note: If your computer ever loses power suddenly, say in the event of a power outage, you may have to re-import your pool. In most cases ZFS will automatically import and mount your pool, but if it doesn't and you can't see your array, simply open the terminal and type sudo zpool import -a.
By default a zpool is mounted at /"zpoolname". The pool should be under our ownership but let's make sure with the following command:
sudo chown -R "yourlinuxusername" /"zpoolname"
Note: Changing file and folder ownership with "chown" and file and folder permissions with "chmod" are essential commands for much of the admin work in Linux, but we won't be dealing with them extensively in this guide. If you'd like a deeper tutorial and explanation you can check out these two guides: chown and chmod.
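If you'd like to see what these permission bits actually do before pointing anything at your pool, here is a harmless scratch-directory demo (the /tmp path is a placeholder; chown itself needs root, so this only exercises chmod and stat):

```shell
# Scratch-directory demo of permission bits; safe to run anywhere.
mkdir -p /tmp/zpool-demo/media
chmod 775 /tmp/zpool-demo/media          # rwx for owner and group, r-x for others
stat -c '%a %U' /tmp/zpool-demo/media    # prints the octal mode and the owner
```

The `-R` flag in the chown command above does the same kind of thing recursively, walking every file and folder under the pool and reassigning ownership in one go.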
You can access the zpool file system through the GUI by opening the file manager (the Ubuntu default file manager is called Nautilus) and clicking on "Other Locations" on the sidebar, then entering the Ubuntu file system and looking for a folder with your pool's name. Bookmark the folder on the sidebar for easy access.
Your storage pool is now ready to go. Assuming that we already have some files on our Windows PC we want to copy over, we're going to need to install and configure Samba to make the pool accessible in Windows.
Step Five: Setting Up Samba/Sharing
Samba is what's going to let us share the zpool with Windows and allow us to write to it from our Windows machine. First let's install Samba with the following commands:
sudo apt-get update
then
sudo apt-get install samba
Next, create a Samba password for your user:
sudo smbpasswd -a "yourlinuxusername"
It will then prompt you to create a password. Just reuse your Ubuntu user password for simplicity's sake.
Note: if you're using just a single external drive replace the zpool location in the following commands with wherever it is your external drive is mounted, for more information see this guide on mounting an external drive in Ubuntu.
After you've created a password we're going to create a shareable folder in our pool with this command:
mkdir /"zpoolname"/"foldername"
Now we're going to open the smb.conf file and make that folder shareable. Enter the following command.
sudo nano /etc/samba/smb.conf
This will open the .conf file in nano, the terminal text editor program. Now at the end of smb.conf add the following entry:
["foldername"]
path = /"zpoolname"/"foldername"
available = yes
valid users = "yourlinuxusername"
read only = no
writable = yes
browseable = yes
guest ok = no
Ensure that there are no line breaks between the lines and that there's a space on both sides of the equals sign. Our next step is to allow Samba traffic through the firewall:
sudo ufw allow samba
Finally restart the Samba service:
sudo systemctl restart smbd
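Before trying the share from Windows, it's worth knowing that Samba ships with a config checker called testparm; running it after any smb.conf edit catches syntax errors before you restart the service. A minimal sketch (guarded so it degrades gracefully if Samba isn't installed yet):

```shell
#!/bin/bash
# Validate smb.conf and print the parsed result, if testparm is available.
check_smb_conf() {
    if command -v testparm >/dev/null && [ -r /etc/samba/smb.conf ]; then
        testparm -s /etc/samba/smb.conf    # -s: print config without prompting
    else
        echo "testparm or smb.conf not found; install and configure samba first"
    fi
}

check_smb_conf
```

If testparm reports errors, fix them before restarting smbd; a typo in the share entry is the most common reason a share silently fails to appear.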
At this point we'll be able to access the pool, browse its contents, and read and write to it from Windows. But there's one more thing left to do: Windows doesn't natively support the ZFS file system and will read the used/available/total space in the pool incorrectly. Windows will read available space as total drive space, and all used space as null. This leads to Windows only displaying a dwindling amount of "available" space as the drives are filled. We can fix this! Functionally this doesn't actually matter, we can still write and read to and from the disk, it just makes it difficult to tell at a glance the proportion of used/available space, so this is an optional step but one I recommend (this step is also unnecessary if you're just using a single external drive). What we're going to do is write a little shell script in bash. Open nano in the terminal with the command:
nano
Now insert the following code:
#!/bin/bash
CUR_PATH=`pwd`
ZFS_CHECK_OUTPUT=$(zfs get type $CUR_PATH 2>&1 > /dev/null)
if [[ $ZFS_CHECK_OUTPUT == *not\ a\ ZFS* ]]
then
    IS_ZFS=false
else
    IS_ZFS=true
fi
if [[ $IS_ZFS = false ]]
then
    df $CUR_PATH | tail -1 | awk '{print $2" "$4}'
else
    USED=$((`zfs get -o value -Hp used $CUR_PATH` / 1024))
    AVAIL=$((`zfs get -o value -Hp available $CUR_PATH` / 1024))
    TOTAL=$(($USED+$AVAIL))
    echo $TOTAL $AVAIL
fi
Save the script as "dfree.sh" to /home/"yourlinuxusername" then change the ownership of the file to make it executable with this command:
sudo chmod 774 dfree.sh
Now open smb.conf with sudo again:
sudo nano /etc/samba/smb.conf
Now add this entry to the top of the configuration file to direct Samba to use the results of our script when Windows asks for a reading on the pool's used/available/total drive space:
[global]
dfree command = /home/"yourlinuxusername"/dfree.sh
Save the changes to smb.conf and then restart Samba again with the terminal:
sudo systemctl restart smbd
Now there’s one more thing we need to do to fully set up the Samba share, and that’s to modify a hidden group permission. In the terminal window type the following command:
sudo usermod -a -G sambashare "yourlinuxusername"
Then restart samba again:
sudo systemctl restart smbd
If we don’t do this last step, everything will appear to work fine, and you will even be able to see and map the drive from Windows and begin transferring files, but you'd soon run into a lot of frustration, as every ten minutes or so a file would fail to transfer and you would get a window announcing “0x8007003B Unexpected Network Error”. This window requires your manual input to continue the transfer with the next file in the queue, and at the end it will reattempt to transfer whichever files failed the first time around. 99% of the time they’ll go through on that second try, but this is still a major pain in the ass, especially if you’ve got a lot of data to transfer or you want to step away from the computer for a while.
It turns out samba can act a little weirdly with the higher read/write speeds of RAIDz arrays and transfers from Windows, and will intermittently crash and restart itself if this group option isn’t changed. Inputting the above command will prevent you from ever seeing that window.
The last thing we're going to do before switching over to our Windows PC is grab the IP address of our Linux machine. Enter the following command:
hostname -I
This will spit out this computer's IP address on the local network (it will look something like 192.168.0.x), so write it down. Once you're done here it might be a good idea to go into your router settings and reserve that IP for your Linux system in the DHCP settings. Check the manual for your specific router model on how to access its settings; typically they can be reached by opening a browser and typing http://192.168.0.1 in the address bar, but your router may be different.
Okay we’re done with our Linux computer for now. Get on over to your Windows PC, open File Explorer, right click on Network and click "Map network drive". Select Z: as the drive letter (you don't want to map the network drive to a letter you could conceivably be using for other purposes) and enter the IP of your Linux machine and location of the share like so: \\"LINUXCOMPUTERLOCALIPADDRESSGOESHERE"\"zpoolnamegoeshere"\. Windows will then ask you for your username and password, enter the ones you set earlier in Samba and you're good. If you've done everything right it should look something like this:
You can now start moving media over from Windows to the share folder. It's a good idea to have a hard line running to all machines. Moving files over Wi-Fi is going to be tortuously slow, the only thing that’s going to make the transfer time tolerable (hours instead of days) is a solid wired connection between both machines and your router.
Step Six: Setting Up Remote Desktop Access to Your Server
After the server is up and going, you’ll want to be able to access it remotely from Windows. Barring serious maintenance/updates, this is how you'll access it most of the time. On your Linux system open the terminal and enter:
sudo apt install xrdp
Then:
sudo systemctl enable xrdp
Once it's finished installing, open “Settings” on the sidebar and turn off "automatic login" in the User category. Then log out of your account. Attempting to remotely connect to your Linux computer while you’re logged in will result in a black screen!
Now get back on your Windows PC, open search and look for "RDP". A program called "Remote Desktop Connection" should pop up, open this program as an administrator by right-clicking and selecting “run as an administrator”. You’ll be greeted with a window. In the field marked “Computer” type in the IP address of your Linux computer. Press connect and you'll be greeted with a new window and prompt asking for your username and password. Enter your Ubuntu username and password here.
If everything went right, you’ll be logged into your Linux computer. If the performance is sluggish, adjust the display options. Lowering the resolution and colour depth does a lot to make the interface feel snappier.
Remote access is how we're going to be using our Linux system from now, barring edge cases like needing to get into the BIOS or upgrading to a new version of Ubuntu. Everything else from performing maintenance like a monthly zpool scrub to checking zpool status and updating software can all be done remotely.
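That monthly scrub can be scheduled so you don't have to remember it. A sketch using cron, assuming your pool is named mypool (check the binary's actual location with "which zpool" first, since it varies between distributions):

```
# Edit root's crontab with: sudo crontab -e
# Then add a line like this to scrub the pool at 3 AM on the 1st of each month:
0 3 1 * * /usr/sbin/zpool scrub mypool
```

You can confirm the last scrub ran by checking the "scan" line in the output of zpool status.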
This is how my server lives its life now, happily humming and chirping away on the floor next to the couch in a corner of the living room.
Step Seven: Plex Media Server/Jellyfin
Okay we’ve got all the groundwork finished and our server is almost up and running. We’ve got Ubuntu up and running, our storage array is primed, we’ve set up remote connections and sharing, and maybe we’ve moved over some of our favourite movies and TV shows.
Now we need to decide on the media server software to use which will stream our media to us and organize our library. For most people I’d recommend Plex. It just works 99% of the time. That said, Jellyfin has a lot to recommend it by too, even if it is rougher around the edges. Some people run both simultaneously, it’s not that big of an extra strain. I do recommend doing a little bit of your own research into the features each platform offers, but as a quick run down, consider some of the following points:
Plex is closed source and is funded through PlexPass purchases while Jellyfin is open source and entirely user driven. This means a number of things: for one, Plex requires you to purchase a “PlexPass” (purchased as a one-time lifetime fee of $159.99 CDN/$120 USD or paid for on a monthly or yearly subscription basis) in order to access certain features, like hardware transcoding (and we want hardware transcoding) or automated intro/credits detection and skipping; Jellyfin offers some of these features for free through plugins. Plex supports a lot more devices than Jellyfin and updates more frequently. That said, Jellyfin's Android and iOS apps are completely free, while the Plex Android and iOS apps must be activated for a one-time cost of $6 CDN/$5 USD. But that $6 fee gets you a mobile app that is much more functional and features a unified UI across platforms, the Plex mobile apps are simply a more polished experience. The Jellyfin apps are a bit of a mess and the iOS and Android versions are very different from each other.
Jellyfin’s actual media player is more fully featured than Plex's, but on the other hand Jellyfin's UI, library customization and automatic media tagging really pale in comparison to Plex. Streaming your music library is free through both Jellyfin and Plex, but Plex offers the PlexAmp app for dedicated music streaming which boasts a number of fantastic features, unfortunately some of those fantastic features require a PlexPass. If your internet is down, Jellyfin can still do local streaming, while Plex can fail to play files unless you've got it set up a certain way. Jellyfin has a slew of neat niche features like support for Comic Book libraries with the .cbz/.cbt file types, but then Plex offers some free ad-supported TV and films, they even have a free channel that plays nothing but Classic Doctor Who.
Ultimately it's up to you, I settled on Plex because although some features are pay-walled, it just works. It's more reliable and easier to use, and a one-time fee is much easier to swallow than a subscription. I had a pretty easy time getting my boomer parents and tech illiterate brother introduced to and using Plex and I don't know if I would've had as easy a time doing that with Jellyfin. I do also need to mention that Jellyfin does take a little extra bit of tinkering to get going in Ubuntu, you’ll have to set up process permissions, so if you're more tolerant to tinkering, Jellyfin might be up your alley and I’ll trust that you can follow their installation and configuration guide. For everyone else, I recommend Plex.
So pick your poison: Plex or Jellyfin.
Note: The easiest way to download and install either of these packages in Ubuntu is through Snap Store.
After you've installed one (or both), opening the app will launch a browser window with the web version of the interface, allowing you to set all the options server side.
The process of creating media libraries is essentially the same in both Plex and Jellyfin. You create separate libraries for Television, Movies, and Music and add the folders which contain the respective types of media to their respective libraries. The only difficult or time consuming aspect is ensuring that your files and folders follow the appropriate naming conventions:
Plex naming guide for Movies
Plex naming guide for Television
Jellyfin follows the same naming rules but I find their media scanner to be a lot less accurate and forgiving than Plex. Once you've selected the folders to be scanned the service will scan your files, tagging everything and adding metadata. Although I do find Plex more accurate, it can still erroneously tag some things and you might have to manually clean up some tags in a large library. (When I initially created my library it tagged the 1963-1989 Doctor Who as some Korean soap opera and I needed to manually select the correct match, after which everything was tagged normally.) It can also be a bit testy with anime (especially OVAs); be sure to check TVDB to ensure that you have your files and folders structured and named correctly. If something is not showing up at all, double check the name.
Once that's done, organizing and customizing your library is easy. You can set up collections, grouping items together to fit a theme or collect together all the entries in a franchise. You can make playlists, and add custom artwork to entries. It's fun setting up collections with posters to match, there are even several websites dedicated to help you do this like PosterDB. As an example, below are two collections in my library, one collecting all the entries in a franchise, the other follows a theme.
My Star Trek collection, featuring all eleven television series, and thirteen films.
My Best of the Worst collection, featuring sixty-nine films previously showcased on RedLetterMedia’s Best of the Worst. They’re all absolutely terrible and I love them.
As for settings, ensure you've got Remote Access going, it should work automatically and be sure to set your upload speed after running a speed test. In the library settings set the database cache to 2000MB to ensure a snappier and more responsive browsing experience, and then check that playback quality is set to original/maximum. If you’re severely bandwidth limited on your upload and have remote users, you might want to limit the remote stream bitrate to something more reasonable; just as a note of comparison Netflix’s 1080p bitrate is approximately 5Mbps, although almost anyone watching through a Chromium-based browser is streaming at 720p and 3Mbps. Other than that you should be good to go. For actually playing your files, there's a Plex app for just about every platform imaginable. I mostly watch television and films on my laptop using the Windows Plex app, but I also use the Android app which can broadcast to the Chromecast connected to the TV in the office and the Android TV app for our smart TV. Both are fully functional and easy to navigate, and I can also attest to the OS X version being equally functional.
Part Eight: Finding Media
Now, this is not really a piracy tutorial, there are plenty of those out there. But if you’re unaware, BitTorrent is free and pretty easy to use, just pick a client (qBittorrent is the best) and go find some public trackers to peruse. Just know now that all the best trackers are private and invite only, and that they can be exceptionally difficult to get into. I’m already on a few, and even then, some of the best ones are wholly out of my reach.
If you decide to take the left hand path and turn to Usenet you’ll have to pay. First you’ll need to sign up with a provider like Newshosting or EasyNews for access to Usenet itself, and then to actually find anything you’re going to need to sign up with an indexer like NZBGeek or NZBFinder. There are dozens of indexers, and many people cross post between them, but for more obscure media it’s worth checking multiple. You’ll also need a binary downloader like SABnzbd. That caveat aside, Usenet is faster, bigger, older, less traceable than BitTorrent, and altogether slicker. I honestly prefer it, and I'm kicking myself for taking this long to start using it because I was scared off by the price. I’ve found so many things on Usenet that I had sought in vain elsewhere for years, like a 2010 Italian film about a massacre perpetrated by the SS that played the festival circuit but never received a home media release; some absolute hero uploaded a rip of a festival screener DVD to Usenet. Anyway, figure out the rest of this shit on your own and remember to use protection, get yourself behind a VPN, use a SOCKS5 proxy with your BitTorrent client, etc.
On the legal side of things, if you’re around my age, you (or your family) probably have a big pile of DVDs and Blu-Rays sitting around unwatched and half forgotten. Why not do a bit of amateur media preservation, rip them and upload them to your server for easier access? (Your tools for this are going to be Handbrake to do the ripping and AnyDVD to break any encryption.) I went to the trouble of ripping all my SCTV DVDs (five box sets worth) because none of it is on streaming nor could it be found on any pirate source I tried. I’m glad I did, forty years on it’s still one of the funniest shows to ever be on TV.
Part Nine/Epilogue: Sonarr/Radarr/Lidarr and Overseerr
There are a lot of ways to automate your server for better functionality or to add features you and other users might find useful. Sonarr, Radarr, and Lidarr are a part of a suite of “Servarr” services (there’s also Readarr for books and Whisparr for adult content) that allow you to automate the collection of new episodes of TV shows (Sonarr), new movie releases (Radarr) and music releases (Lidarr). They hook into your BitTorrent client or Usenet binary newsgroup downloader and crawl your preferred torrent trackers and Usenet indexers, alerting you to new releases and automatically grabbing them. You can also use these services to manually search for new media, and even replace/upgrade your existing media with better quality uploads. They’re a little tricky to set up on a bare-metal Ubuntu install (ideally you should be running them in Docker containers), and I won’t be providing a step by step on installing and running them, I’m simply making you aware of their existence.
The other bit of kit I want to make you aware of is Overseerr which is a program that scans your Plex media library and will serve recommendations based on what you like. It also allows you and your users to request specific media. It can even be integrated with Sonarr/Radarr/Lidarr so that fulfilling those requests is fully automated.
And you're done. It really wasn't all that hard. Enjoy your media. Enjoy the control you have over that media. And be safe in the knowledge that no hedgefund CEO motherfucker who hates the movies but who is somehow in control of a major studio will be able to disappear anything in your library as a tax write-off.
Text
Grand Trine ✧ their strength & weakness
A grand trine is an astrological structure that occurs when three planets or points in the same element sit at harmonious 120-degree angles to each other, forming an equilateral triangle.
✧
they have advantageous natural conditions that can make life easier, but it doesn't mean one can completely relax and rely on those advantages
be cautious about their weaknesses and limitations, and maintain a humble and proactive mindset
things might seem smooth sailing from the start, we cannot ignore the natural cycle of change. We must always be prepared to face adversity
recognize our own strengths and weaknesses, maintain humility and a sense of crisis
•
>> Venus sign • check how you pursue love & beauty
>> Mars ▸ and their cup of tea ☼
✧ Fire Grand Trine (Aries, Leo and Sagittarius)
It brings a harmonious and fortunate influence - with passion, creativity, and enthusiasm. When a fire grand trine is present in a birth chart, it indicates a person who is dynamic, confident, and able to easily harness their inner passion to achieve their goals. They may have a natural talent for leadership, entrepreneurship, or any endeavor that requires a bold and adventurous spirit. The balanced energies of the fire grand trine can also foster a sense of optimism, spontaneity, and a zest for life. Individuals with this configuration may find it easy to maintain a positive outlook and inspire those around them. That said, a fire grand trine is not a guarantee of success or an entirely problem-free life; it must be considered in the context of the entire birth chart and the individual's personal circumstances.
✧ Earth Grand Trine (Taurus, Virgo and Capricorn)
People with an Earth grand trine tend to have a very grounded, practical, and realistic personality. They prefer order, routine, and stability, and place a high value on feeling secure and comfortable. Earth signs are very sensory-oriented, and find joy and solace in the physical, material world around them. They are often hesitant to leave their comfort zones, preferring to act only when they feel completely ready. While the Earth grand trine indicates strong creative potential and the ability to manifest their ideals, actually taking action can be a challenge for them. Some may even feel they don't need to work hard, relying on government assistance or a wealthy partner to provide for them, as they don't overly prioritize financial matters. This nonchalance towards money management can also lead to financial troubles, as they sometimes are overly optimistic and assume good times will continue. Earth grand trine shows a lack of initiative and adventurous spirit, with a tendency to cling to comfort and ease. Overcoming these limitations will help them to realize their capabilities.
✧ Air Grand Trine (Gemini, Libra and Aquarius)
Air signs excel at mental abilities and social interaction - they are naturally inclined towards networking and communication. Those with an Air grand trine are skilled at making connections, conceptualizing ideas, and expanding their knowledge. They have active, agile minds and enjoy theoretical discussions, maintaining an objective, detached perspective. While Air signs seem to relish socializing, they often lack deep emotional experiences, preferring to skim across the surface. They are considered among the most detached of the signs. Air signs tend to think before acting, but their short attention spans can lead them to become all talk and no action. Some may become so enthusiastic about their learning that they neglect practical application, forever remaining in a state of learning without fully implementing their knowledge. Translating complex ideas into concrete reality is still a challenge, as they struggle to break free from the constraints of thought alone.
✧ Water Grand Trine (Cancer, Scorpio and Pisces)
The Water grand trine is deeply connected to the realms of emotion and spirituality. People with a Water grand trine are particularly sensitive and skilled at attuning to their own and others' inner lives. Their feelings run deep, often tapping into the subconscious and transcendent. They typically trust their intuition and inner senses, relying more on gut feelings than logic in their decision-making. They are highly focused on the spiritual and psychological dimensions of life. It shows how they express their emotions - they experience life's events with profound emotional intensity. This depth of feeling can sometimes make them overly sensitive and prone to emotional ups and downs. Yet the same quality allows them to be highly empathetic and caring when others need support. Conversely, they may at times even have emotional crises, seeking healing and transformation. They can swing between being perpetually immersed in their feelings, or completely unable to find someone who fulfills their emotional needs. They have exceptional artistic gifts, though some may keep this creativity within themselves rather than fully sharing it. They often enjoy solitude, as it provides the self-nourishment and contentment they crave.
•
Quick access >>
>> Back to Masterlist ✧ Explicit Content
Exclusive access : Patreon
/ instagram : @le.sinex / @botanicalsword
Moon signs ♡ Being in love vs Falling out of love
Mercury - interpreted as they are
Mars Sign's Approaches to Relationships
❥ Synastry / Composite Chart Observations
#grand trine#earth signs#fire signs#astro community#astro posts#air signs#water signs#water moon#overlays#synastry#synastry observations#astrology placement#astro#loa#astro observations#astrology#watercolour art#astrology placements#asteroid astrology#astro degree#astrology observations#electional astrology#astrocartography#astro placements#astro memes#vedic astro notes#lilith astrology#astronomy#astro notes#8th house
Text
Stellium in the 10th or MC☀️
Having a 10th house stellium is a very powerful configuration to have😩. A 10th house stellium often amplifies themes related to career, public life, reputation, and legacy. It emphasizes a strong focus on personal achievements, ambitions, and societal recognition📈. Naturally ruled by Capricorn and Saturn, people with this stellium may feel a deep drive to leave a mark on the world, for better or worse, often prioritizing their professional or public image. It can also bring challenges like work-life imbalances or excessive concern with external validation. There may be a heightened awareness of how others perceive you, with a desire to earn respect and recognition. This house reflects the need for discipline and planning to achieve long-term success.

I wouldn't be surprised if a 10th house stellium gains fame at some point in life. Planets like the Sun, Venus, Mars, Mercury, Jupiter, or Saturn are good to have here to gain fame 🤩. Positive aspects from the Sun, Venus, Jupiter, or Pluto can enhance public recognition, while supportive aspects from benefics (Venus, Jupiter) or outer planets (Uranus, Neptune, Pluto) can increase the potential for fame. Prominent fixed stars can relate to fame as well. Not all attention is positive😒. This placement can bring scrutiny, high expectations (from others or yourself), or workaholic tendencies. Fame, if achieved, often requires significant effort and responsibility due to Saturn's influence!

The Tenth House thrives on ambition and structure, so remember to set clear, long-term goals for your career and public life. Break those goals into manageable steps to stay focused. Use social media, networking, or public platforms to showcase your talents or ideas 💡. Maintaining personal connections and emotional health is crucial. Lean on the opposing 4th house (home and inner life) for grounding❤️.
Text
Basic Linux Security (Updated 2025)
Install Unattended Upgrades and enable the "unattended-upgrades" service.
Install ClamAV and enable "clamav-freshclam" service.
Install and run Lynis to audit your OS.
Use the "last -20" command to see the last 20 users that have been on the system.
Install UFW and enable the service.
Check your repo sources (eg; /etc/apt/).
Check the /etc/passwd and /etc/shadow lists for any unusual accounts.
Use the finger command to check on activity summaries.
Check /var/logs for unusual activity.
Use "ps -aux | grep TERM" or "ps -ef | grep TERM" to check for suspicious ongoing processes.
Check for failed sudo attempts with "grep 'NOT in sudoers' /var/log/auth.log".
Check journalctl for system messages.
Check to make sure rsyslog is running with "sudo systemctl status rsyslog" (or "sudo service rsyslog status") and if it's not enable with "sudo systemctl enable rsyslog".
Perform an nmap scan on your machine/network.
Use netstat to check for unusual network activity.
Use various security apps to test your machine and network.
Change your config files for various services (ssh, apache2, etc) to non-standard configurations.
Disable guest accounts.
Double up on ssh security by requiring both keys and passwords.
Check your package manager for any suspicious installed apps (keyloggers, cleaners, etc).
Use Rootkit Scanners (chkrootkit, rkhunter).
Enable software limiters (Fail2Ban, AppArmor).
Verify System Integrity via fsck.
Utilize ngrep/other networking apps to monitor traffic.
Utilize common honeypot software (endlessh).
Create new system-launch subroutines via crontab or shell scripts.
Ensure System Backups are Enabled (rsnapshot).
Check for suspicious kernel modules with "lsmod".
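Several of the one-off checks above can be stitched together into a small script you run periodically. A minimal sketch (log paths and tool names vary by distro, so everything is guarded; adjust for your system):

```shell
#!/bin/bash
# Quick triage: recent logins, failed sudo attempts, and listening sockets.
quick_audit() {
    echo "== Recent logins =="
    if command -v last >/dev/null; then
        last -n 5
    else
        echo "last not available"
    fi

    echo "== Failed sudo attempts =="
    if [ -r /var/log/auth.log ]; then
        grep "NOT in sudoers" /var/log/auth.log || echo "none found"
    else
        echo "auth.log not readable; try: journalctl | grep 'NOT in sudoers'"
    fi

    echo "== Listening sockets =="
    if command -v ss >/dev/null; then
        ss -tuln
    elif command -v netstat >/dev/null; then
        netstat -tuln
    else
        echo "no socket tool found"
    fi
}

quick_audit
```

Dropping a script like this into cron and mailing or logging its output gives you a lightweight daily baseline to compare against.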
#linux#security#linuxsecurity#computersecurity#networking#networksecurity#opensource#open source#linux security#network#ubuntu#kali#parrot#debian#gentoo#redhat
Text
(remaking the post because you can't edit polls and i gave the wrong options)
Alright I'm registering for classes and someone needs to talk me out of doing stupid shit but I'm unsure of what shit is stupidest.
Winter term:
I just finished an 8-week photo class that ended up being a huge investment in time just to go and shoot. However that was introductory photography and there is a 6-week intermediate photography class over winter term. I am signed up for photography. (Elective option for AA in Visual Arts)
There is also a 6-week introductory python course over winter term that I am signed up for and will be taking. I'm solid on that one, as long as I pass my C# class this term I'm going to be taking Python for 6 weeks at the beginning of the year.
Spring term:
College Chemistry Saturday class. 7am to 12pm for sixteen weeks. Lab and lecture; this school doesn't offer any chem classes that are after standard 9-5 hours during the regular week or that can be taken even partially online. Pretty sure I'm going to be stuck with this one and am configuring the rest of my schedule around being *less* miserable because of this class.
Survey of Western Art - Online, seems like a gimme. Does have a textbook but not one that I'm going to pay for. (Required for AA in Visual Arts)
2-Dimensional Design - Online, seems fun and like a gimme, Free/No textbook. (Required for AA in Visual Arts)
Object-Oriented Programming - Online, seems difficult, expensive textbook. Will probably be very necessary if I end up going down a more CS/tech path. Probably going to force myself to take this class.
Java Programming - Online, seems not unapproachable, expensive textbook. I don't particularly wanna but my school offers really limited options for computer science and I want to get what I can out of it before I go somewhere else.
Rationale for these weird combinations:
I'm applying as a nursing student at three schools and a biochem student at one of those schools (nutrition programs are apparently only for first-time students; 2nd Bachelor's applications are a lot more limited. I could apply to major in Francophone Studies at one of the schools though). Supposing I get accepted, these classes certainly won't hurt my status at any school that accepts me and the chemistry class is going to be really really necessary. This is the "i give a fuck about nutrition science and also directly helping people" path and if I go this way I'm interested in NP programs down the line. LOTS more school of the serious "I can't work and do this kind of school at the same time" variety.
If I *don't* get accepted to the programs I'm applying to, I'm going to go to a different community college and start working on a couple of AS degrees in computer junk (network admin and security management, computer and networking technology) and get some computer junk certs. I don't think I want/need a BS in computer science, this is the "practical" route of "I could finish this stuff pretty easily and continue working in a field where I have a lot of connections and familiarity with the industry but I am indifferent about a lot of it (pretty passionate about security and accessibility tho). Also allows me to keep working while I just churn school in the background, and all of the computer classes are transferable between the two schools.
Art classes: I think having multiple degrees is funney. I am currently 5 classes away from an AA in visual arts, at the end of this term I will be 4 classes away; if I take all the classes here and can take an elective over the summer I'll have a degree in visual arts. (There is a reasonable possibility that I'll continue taking bullshit classes behind the scenes to get silly degrees regardless of what happens otherwise)
Pretty sure the sensible thing is to drop *at least* photography and survey of western art and also possibly Java and 2D design. I'm somewhat concerned that if my spring term is just Saturday chem and object oriented programming I will start biting things.
So:
86 notes
Text
Well, hello world, @jv here.
This is the blog for throwing some news about Goblin, the fediverse-based tumblr clone I'm working on.
The idea is to develop an open-source platform that replicates some of the most peculiar intricacies of tumblr, that anyone can install on a server, and become part of a federated network that works, in a way, as one. A tumblr-owned tumblr, if you will, that is much more resilient to financial woes than our current beloved hellsite.
None of the current platforms running on the fediverse offers a user experience close to tumblr. And more importantly: all of them lack the features that make the magic of tumblr happen: reblogs, html posts, talking in the tags, etc.
So ... let's make one ourselves. The idea is to take one of the mastodon clones, add the missing features, and launch it to the world to use. For purely personal reasons (I know javascript/node much better than any other language) I have forked Firefish, which is itself a fork of Misskey.
The development is being done at https://github.com/johnHackworth/goblin, and yeah, Goblin is the working name of the project.
I have an instance running at http://goblin.band/ . It's closed for new users, and it's extremely unstable at the moment, barely anything more than a very badly configured firefish server, but if anyone wants to poke around let me know and I can allow you to register.
So what's in the plans for a version 1.0 that I feel confident doing myself (though any help is also accepted, of course)?
[done] Add support for reblog chains
[done] Add support for html posts
[in progress] Change the default editor for a block editor that lets users add content without having to write HTML
Manage notifications (especially what happens when someone reblogs a reblog, which is not supported by Firefish)
Review all the UI to remove any firefish or misskey references, remove unused sections.
Add tumblr-style tag system
Review the UI and polish it a little bit
What's in the plans for that 1.0 that I have no idea how to do / I know I'm terrible doing it myself?
Find a way to package everything so it's easy to install on a server without having to manually install a bunch of dependencies.
Actually make my goblin.band server ... a proper server. With HTTPS and all the fancy stuff, you know.
Figure out if this thing actually federates with other servers out of the box or if I have to do anything to make it happen.
Figure out what's best for file storage. Probably disallow uploading anything that's not images, but see what to do with uploaded files and such.
So, if anyone wants to help, as we devs say, Pull Requests accepted!
/cc @echo @analytik2
71 notes
Text
I desperately need someone to talk to about this
I've been working on a system to allow a genetic algorithm to create DNA code which can create self-organising organisms. Someone I know has created a very effective genetic algorithm which blows NEAT out of the water, in my opinion. This algorithm is very good at using food values to determine which organisms to breed and how to breed them, and it has a multitude of different biologically inspired mutation mechanisms which allow for things like meta genes and meta-meta genes, among a whole slew of other things. I am building a translation system on top of it, basically a compiler, and designing an instruction set and genetic repair mechanisms to allow it to convert ANY hexadecimal string into a valid, operable program. I'm doing this by having an organism with, so far, 5 planned chromosomes. The first and second chromosomes are the INITIAL STATE of a neural network: the number and configuration of input nodes, the number and configuration of output nodes, whatever code it needs for a fitness function, and the configuration and weights of the layers. This neural network is not used at all in the fitness evaluation of the organism; it is purely something the organism itself can manage, train, and utilize however it sees fit.
The third is the complete code of the program which runs the organism. It's basically a list of ASM opcodes and arguments written in hexadecimal. It is composed of codons which represent the different hexadecimal characters, as well as a start and stop codon. This program will be compiled into executable machine code using LLVM IR and a custom instruction set I've designed for the organisms, giving them a Turing-complete programming language and some helper functions to make certain processes simpler to evolve. This includes messages between the organisms, reproduction methods, and all the methods necessary for the organisms to develop sight and hearing, receive various other inputs, and also to output audio, video, and various outputs like mouse, keyboard, or gamepad output.

The fourth is a blank slate, in which the organism can evolve whatever data it wants. The first half will be the complete contents of the organism's ROM after the important information, and the second half will be the initial state of the organism's memory. This will likely be stored as base64 of its hash and unfolded into binary on compilation.
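To make the codon idea a little more concrete, here's a rough sketch of the kind of start/stop scan I'm describing. The codon spellings (AAA, TTT, C0X–CFX) are placeholders I made up for the example, not the actual alphabet:

```python
# Rough sketch: scan a strand for a program between start/stop codons.
# Codon spellings here are placeholders, not the real encoding.

HEX_DIGITS = "0123456789ABCDEF"
START, STOP = "AAA", "TTT"                        # hypothetical start/stop codons
CODON_TO_HEX = {f"C{h}X": h for h in HEX_DIGITS}  # one 3-char codon per hex digit

def extract_program(strand: str) -> str:
    """Collect hex characters between the first START and the next STOP."""
    out, reading = [], False
    for i in range(0, len(strand) - 2, 3):        # walk the strand codon by codon
        codon = strand[i:i + 3]
        if codon == START:
            reading = True
        elif codon == STOP and reading:
            break
        elif reading and codon in CODON_TO_HEX:   # unrecognized codons are skipped,
            out.append(CODON_TO_HEX[codon])       # a crude form of genetic repair
    return "".join(out)

print(extract_program("GGGAAACDXCEXTTTGGG"))  # DE
```

Skipping anything that isn't a known codon is what makes the read tolerant of junk between genes, which is half the point of the repair mechanisms.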
The 5th chromosome is one I just came up with, and I am very excited about it: it will be a translation dictionary. It will be 512 individual codons exactly, with each codon pair being mapped between 00 and FF hex. When evaluating the hex of the other chromosomes, this dictionary will be used to determine the equivalent instruction of any given hex pair. When evolving, each codon pair in the 5th chromosome will be guaranteed to be a valid opcode in the instruction set by using modulus to constrain each pair to the 55 instructions currently available. This will allow an organism to evolve its own instruction distribution, and try to prevent random instructions which might be harmful or inefficient from springing up as often, instead more often selecting for efficient or safer instructions.
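A minimal sketch of how that modulus constraint could work. The 55-opcode count is from above; the function names and dictionary layout are just my illustration, not the actual implementation:

```python
# Sketch of the evolvable translation dictionary. NUM_OPCODES matches the
# 55 instructions mentioned above; names and layout are illustrative.

NUM_OPCODES = 55

def build_dictionary(chromosome5: str) -> dict[str, int]:
    """Map every hex pair 00-FF to a valid opcode index.

    chromosome5 is 512 hex characters: the i-th codon pair is the evolved
    value for hex pair i. Modulus guarantees every entry is a real opcode.
    """
    assert len(chromosome5) == 512
    return {
        f"{i:02X}": int(chromosome5[i * 2:i * 2 + 2], 16) % NUM_OPCODES
        for i in range(256)
    }

def translate(program_hex: str, table: dict[str, int]) -> list[int]:
    """Decode another chromosome's hex string into opcode indices."""
    return [table[program_hex[i:i + 2].upper()]
            for i in range(0, len(program_hex), 2)]

# Duplicate codon values skew the distribution, which is exactly the point:
# evolution can make dangerous opcodes rare by mapping few hex pairs to them.
table = build_dictionary("00" * 256)  # degenerate dictionary: everything -> opcode 0
print(translate("DEADBEEF", table))   # [0, 0, 0, 0]
```

Because the dictionary itself is under selection, the opcode frequency distribution becomes a heritable trait rather than a fixed property of the encoding.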
#ai#technology#genetic algorithm#machine learning#programming#python#ideas#discussion#open source#FOSS#linux#linuxposting#musings#word vomit#random thoughts#rant
7 notes
Text
Dec. 28 (UPI) -- Chinese hackers called Salt Typhoon have infiltrated a ninth telecommunications firm, gaining access to information about millions of people, U.S. cybersecurity officials say.
The FBI is investigating the Salt Typhoon attacks, which are spurring new defensive measures, deputy U.S. national security adviser Anne Neuberger told reporters on Friday.
"As we look at China's compromise of now nine telecom companies, the first step is creating a defensible infrastructure," she said.
The hackers primarily are targeting individuals and organizations involved in political or governmental activities and a significant number of hacking victims are located in the Washington D.C.-Virginia area.
The hackers can geolocate millions of people in the United States, listen to their phone conversations and record them whenever they like, Politico reported.
Among recent victims are President-elect Donald Trump, Vice President-elect JD Vance and several Biden administration officials.
Neuberger did not name the nine telecommunications firms that have been hacked, but said telecommunications firms and others must do more to improve cybersecurity and protect individual customers.
"We wouldn't leave our homes, our offices unlocked," she said. "Yet, the private companies owning and operating our critical infrastructure often do not have the basic cybersecurity practices in place that would make our infrastructure riskier, costlier and harder for countries and criminals to attack."
She said companies need better management of configuration, better vulnerability management of networks and better work across the telecom sector to share information when incidents occur.
"However, we know that voluntary cybersecurity practices are inadequate to protect against China, Russia and Iran hacking our critical infrastructure," Neuberger said.
Australian and British officials already have enacted telecom regulations "because they recognize that the nation's secrets, the nation's economy relies on their telecommunications sector."
Neuberger said her British counterparts told her they would have detected and contained Salt Typhoon attacks faster and minimized their spread and impact.
"One of the most concerning and really troubling things we deal with is hacking of hospitals [and] hacking of healthcare data," Neuberger said. "We see Americans' sensitive healthcare data, sensitive mental health procedures [and] sensitive procedures being leaked on the dark web with the opportunity to blackmail individuals with that."
She said federal regulators are updating existing rules and implementing new ones to counteract the cyberattacks and threats from Salt Typhoon and others.
The Department of Justice on Friday issued a rule prohibiting or restricting certain types of data transactions with certain nations or individuals who might have an interest in that data.
The protected information includes those involving government-related data and bulk sensitive personal data of individuals that could pose an unacceptable risk to the nation's national security.
The Department of Health and Human Services likewise issued a proposed rule to improve cybersecurity and protect the nation's healthcare system against an increasing number of cyberattacks.
The proposed HHS rule would require health insurers, most healthcare providers and their business partners to improve cybersecurity protections for individuals' information that is protected by the Health Insurance Portability and Accountability Act of 1996.
"The increasing frequency and sophistication of cyberattacks in the healthcare sector pose a direct and significant threat to patient safety," HHS Deputy Secretary Andrea Palm said Friday.
"These attacks endanger patients by exposing vulnerabilities in our healthcare system, degrading patient trust, disrupting patient care, diverting patients and delaying medical procedures."
The proposed rule "is a vital step to ensuring that healthcare providers, patients and communities are not only better prepared to face a cyberattack but are also more secure and resilient," Palm added.
Neuberger estimated the cost to implement improved cybersecurity to thwart attacks by Salt Typhoon and others at $9 billion during the first year and $6 billion for years 2 through 5.
"The cost of not acting is not only high, it also endangers critical infrastructure and patient safety," she said, "and it carries other harmful consequences."
The average cost of a breach in healthcare was $10.1 million in 2023, but the cost of last year's Change Healthcare breach is nearing $800 million.
Those costs include the costs of recovery and operations and, "frankly, in the cost to Americans' healthcare data and the operations of hospitals affected by it," Neuberger said.
The Federal Communications Commission also has scheduled a Jan. 15 vote on additional proposed rules to combat Salt Typhoon and other hackers.
6 notes
Text
Can Denji Survive These Horror Movie Villains?
So, a few days ago I watched Anthony Gramuglia's You Cannot Survive These Horror Movie Villains https://www.youtube.com/watch?v=JI7YOGb0e3Q I enjoyed it a lot. Very fun countdown of some of the most terrifying "you're so fucked" kinda villains in horror.
But a little worm of a thought crept into my mind these past few days: "Could Denji take these villains?" What can I say? I love Chainsaw Man, and given it's a horror-tinged work, I figured it'd be fun to match Denji up against these supposedly unstoppable forces.

Spoiler Warning: This post is gonna do a lot of spoiling for Chainsaw Man and probably have spoilers for the works talked about in Ant's video. And Ant's video, really. If you're not wanting either of these, keep on scrolling. Also, watch Ant's video if you haven't. Is really good.

Only rule I'm going with for this is that Denji is going at this alone. No Aki, no Power, no Asa, no Himeno, no Kishibe and certainly no Kobeni. This is Denji vs. 20 horror movie villains. With that said, let's roll.
20. Denji vs. Sadako Yamamura
Given the rules of the Ring curse, I don't think Denji is really capable of taking Sadako down. She has an extensive catalog of powers and can kill with a thought. At best, he could try befriending her if he meets her when she's nice, but I don't know for sure if Pochita's "give her lots of hugs" approach will be effective. Now, Samara Morgan, that's easy. Like, we don't even need to talk about copying the tape and passing it on. The second he and Aki enter that video store, Denji's probably gonna spot that copy of Yor, the Hunter from the Future and demand they rent it. Completely ignore the Ring tape, that's gonna be Aki's problem, not his. He's going into Yor's world!
19. Denji vs. Carrie White
Oh, this one's easy. Denji would beat Carrie with kindness. Both are outcasts with weird powers, so I can absolutely see Denji managing to befriend Carrie, giving her a support network that she needed when facing stuff like the school bullies or her awful mother. He'd probably ask Carrie to the prom before Tommy does. When the whole pig's blood prank goes down, Denji would probably rush the stage to drag Carrie off and get her out of there, even if he'd have to transform into Chainsaw Man to survive Carrie's early onslaught of telekinetic attacks. I dunno, I just picture the two of them outside the high school, Denji comforting a crying Carrie, licking the pig's blood off of her. It's a weirdly cute image.
18. Denji vs. It (aka Pennywise the Dancing Clown)
Despite Ant making the argument that soloing It wouldn't work, it absolutely could for Denji. See, Pennywise preys on fear. He likes to make sure his victims are nice and scared of him before he eats them. The problem here is that Denji is absolutely fearless. The dude went through absolute hell for so much of his life that a clown trying to eat him would probably be seen as a mild annoyance. Pennywise would be low diffed by Denji easy.
17. Denji vs. Lucy (Elfen Lied)

Man, this is one where the "give her lots of hugs" strat really feels like a Hail Mary, doesn't it? Like, Chainsaw Man or not, I don't think Denji is gonna win a stand-up fight against the Diclonius Queen. Like, unless Denji can somehow kill her with kindness, he's pretty screwed here.
16. Denji vs. Pinhead
Well, first off, Denji is probably too stupid to work the Lament Configuration, so this is an easily skippable fight. Otherwise, while I think Denji would put up a hell (heh) of a fight, I just don't see him winning this under normal circumstances. Pinhead is immortal and has a high pain tolerance. The times he has been killed, he's revived himself in Hell.

Now, I did note that this was a loss for Denji under normal circumstances. There is one trump card Denji has that I don't wanna pull out all willy-nilly, but in the case of Pinhead, I think we'll draw it here. Pochita. Resting within Denji is his closest friend, the Chainsaw Devil, Pochita. Literally, Pochita is inside of Denji, functioning as his heart. In very rare circumstances, Pochita has taken full control of Denji's body and reverted to his full, true form as the original Chainsaw Man. The Devil that other Devils fear. The Hero of Hell. True Form Chainsaw Man. (Note: 'True Form Chainsaw Man' is not an official title, just one I'm using for convenience.)

Now, you might be wondering what the key difference here is and what turns this from loss to win. At least, if you haven't read CSM. If you have, you know what Pochita can do in his true form, but for those who need a refresher or just a crash course: basically, Pochita can erase concepts from existence by eating their associated Devils. In Chainsaw Man canon, World War II and AIDS didn't happen because of Pochita.
I think the same ability can be applied to the villains Denji faces down in this list. Now, you may find yourself asking "Well, Mega, why not just have Denji turn into True Form Chainsaw Man and wipe out all the villains?" Well, my logic here is mainly because that's not something that's easily done in the manga. It's mostly an act that comes about when Denji is at his emotional lowest. After Makima had both Aki and Power killed. After Barem presented Nayuta's head to Denji on a platter (after already killing Meowy and the dogs on top of that.) True Form Chainsaw Man is not something Denji can just at-will transform into. It's basically Pochita taking the wheel because Denji is so emotionally devastated that he can't do anything.
So, I wanna save Pochita for fights where victory seems 100% hopeless. Fights where even kindness just isn't on the table as a way to victory. And I gotta say... don't think giving Pinhead a hug is gonna do anything.
So, Pochita takes over and after a grueling fight, devours Pinhead. It's a victory, but it's the hardest fight they've had so far.
15. Denji vs. Jason Voorhees

Oh, easy. Jason may have a high body count and come back a lot, but he's still just a man compared to Chainsaw Man. Denji would tear Jason to ribbons. Wouldn't even need to get his new girlfriend Carrie involved in this, though she'd probably also low diff Jason. He's weak to Psychic, as shown in New Blood.
14. Denji vs. Nemesis
Eh, this is just Final Form Jason. It'd be high diff, but Denji can absolutely go the distance on this. It helps that Nemesis is usually hanging around a zombie-infested Raccoon City, so there's plenty of zombies for Denji to snack on to replenish his blood supply.
13. Denji vs. The Invisible Man
Weirdly, this is gonna be an unconventional victory for Denji. I mean, it'll probably just be him putting a chainsaw through Jack Griffin's solar plexus, that's the standard part. No, the thing that gets him the victory is just the simple act of transforming.
Ya see, when your transformation involved chainsaws sprouting from your forearms, it can get... messy. Blood going everywhere and if even a splotch of that blood hits Griffin, then that gives Denji a visible target to attack. Jack's going down easy.
12. Denji vs. The Djinn
Man... I think this one's a clean loss for ya, Denji. Like, this dude's a reality warper... don't touch the fire opal, I guess. That's all I got. There's no way he's winning here.
11. Denji vs. The Deadites

Man, and you thought Ash Williams with his one chainsaw hand was bad for the Deadites...
10. Denji vs. Michael Myers
You remember all the stuff I said about Jason? Same applies here. Michael Myers may be pure evil, but he's still just a man. Thorn cult bullshit or no, you ram enough chainsaws into Michael and he's going down. Will he put up a fight? Sure. Will he win? No.
9. Denji vs. Art the Clown
Art's a new player on the scene and while Denji did good against Pennywise, I don't feel like I have enough of a grasp on Art's abilities to call this one. Call this one a no contest on the board, especially since I don't wanna play the Pochita card for this one.
8. Denji vs. Kayako and Toshio

I'm not familiar enough with Ju-On to know if there's anything that would make it so Denji could beat these kids, and given that I've had him lose out against an Onryo already, best to keep things consistent. This one's a loss for him.

7. Denji vs. Xenomorph
I think Denji can take one Xenomorph, but it would be a brutal match-up. One of the advantages Denji usually has is that his opponents have blood, something he needs to maintain Chainsaw Man form. Problem is, Xenomorph blood is acidic, so not only does that mean he's getting hurt tearing into this thing, he can't even heal from those wounds. He could still down one of them, but put him up against a pack and he's screwed.
6. Denji vs. The Ghosts from Pulse

Yep, that's another loss for Denji. Not much hope against the Japanese ghosts.

5. Denji vs. The Great Old Ones

I... think he's got a fighting chance here. Like, this is a dude who in his own work is being squared up against the concepts of War and Death. We may be ants to the Great Old Ones, but Denji's more like a wasp. Sure, he could be swatted down, but wasps have been known to kill humans. Put this under like, extremely high difficulty. Some real Dante Must Die shit.
4. Denji vs. Dracula
Most iterations of Dracula would fall to Denji. Like, OG Dracula was killed with Bowie knives; I think the dude with the chainsaw face has good odds. Hard to bite into a man's jugular when your own has a whirring cutting chain going through it. However, the big one Ant brings up is a total loss for Denji.
Alucard.
So, I brought up True Form Chainsaw Man and how it can erase things from existence by eating them... yeah, this is something that Alucard canonically can survive. The biggest trump card I had for Denji and Alucard can just say "Hah! No..." to that shit. This is a major loss for Denji, no way he can take on Alucard.
3. Denji vs. King Ghidorah
It's a hard battle, but Denji has some experience with fighting Kaiju. I put this at "has a fighting chance" along with the Great Old Ones.
2. Denji vs. Freddy Krueger
Oh, Freddy is the most screwed out of any of them. See, Freddy Krueger kills in your dreams. Guess who lives in Denji's dreams...

That's right. Good ol' Pochita. The second Freddy tries anything with Denji, he's gonna be facing down the Chainsaw Devil. The Devil other Devils fear.
So uh... good luck, Fred. Don't think those Dream Demons are gonna help you out there. They've probably bailed already. Denji doesn't even have to do anything, just sit back and let Pochita go full power Chainsaw Man. Freddy's gonna get eaten and wiped from existence easy. No more Bastard Son of a 1000 Maniacs. No more Springwood Slasher. No more Nightmare on Elm Street. He's just gone.

1. Denji vs. The Thing
And whereas #2 on the list was a no diff, I really have no idea how Denji could even begin to fight something like the Thing. It's a microscopic alien parasite that will get into his bloodstream and mutate him and Pochita from the inside. I don't even know if Power's hemokinetic abilities would work on it, and I can't even play that card since this has gotta be Denji soloing the opponent. This might be one where he just loses. Game over. Sorry, Denji.

So, I'd say roughly half the list would be victories for Denji, two entries would be better odds than if he was a normal human, one entry is mostly wins except for one very OP iteration, about seven are straight-up losses, and one is a complete shrug on my part. Hope you enjoyed reading this long, LONG post of mine, be sure to check out Ant's original video up top if you haven't already, and keep up the Halloween spirit, folks. God, I am gonna get some real angry reblogs and comments on this one...
#anthony gramuglia#chainsaw man#ringu#ju on#carrie 1976#hellraiser#the wishmaster#elfen lied#terrifier#friday the 13th#halloween#stephen king's it#pulse#resident evil#evil dead#the invisible man#the thing#hellsing#cthulu#godzilla#alien franchise#a nightmare on elm street#got enough fucking tags on this?
8 notes
Text
Who am I?
My name is Gabbie Ramos. I enjoy the outdoors and enjoy running in my free time. I chose to take this course because I wanted to gain more knowledge about the field. I have been around technology for the majority of my life and have just grown to like anything tech. Entering my third semester, I have liked learning about the fundamentals and concepts of cybersecurity. With that being said, I do plan to pursue a career in that specific field.
My main interest right now would be cybersecurity, although I do enjoy learning about networks as well. I find that I thrive with configuration management in comparison with the other skills I have been learning. I definitely would love to learn more about the ins and outs of how security works in various types of industries. As for networks, I would just like to develop my overall understanding of the concepts that we are currently learning.
I do not have a deep understanding of what emerging technology means, other than that it describes the development of technology in modern society. So far in this course, I have learned that emerging technology involves the different factors one must consider to achieve a certain goal, whether the goal is to improve existing technologies or to innovate new ones.
9 notes