#github repo deleted
Text
How to Delete a GitHub Repository?
Hello! In today's blog post I'll walk you step by step through how to delete a GitHub repository. GitHub is one of the most widely used code development and project management platforms in the world. Sometimes, however, projects fail or are simply no longer needed, and deleting the GitHub repository can be a sensible step. In this post, how to delete a GitHub…
View On WordPress
Text
ChromeOS is gone in favor of Debian. I now have a pretty nice really cheap laptop. It's obscene to make a machine with a 4K screen and 16GB of RAM and then give it 32GB of flash memory and a web browser OS, but now it's dirt cheap because it's EOL so it works out great for me
ChromeOS is pretty funny. I enabled developer mode, and it still tries to convince me to log into my Google account, but now if I press ctrl+alt+F2 I go to a tty with a startup message that says "here's how to change the root password :)" "here's how to fuck with the system daemons :)" "if you're having trouble booting a self-signed kernel, run this command:"
#no audio drivers though#the most promising solution I've found is 'run this script from a deleted github repo that may permanently damage your speakers'
Text
A free image hosting solution for AO3 and elsewhere - A Tutorial (mobile-friendly!)
See the demo site made from this template IN ACTION: https://hotlink-archive-template.pages.dev/
This guide is for an easy, mobile-friendly way to host files for hotlinking on AO3 or elsewhere, using github and cloudflare pages.
I've encountered far too many dead links in fanfics and forums simply because a hosting service decided to dump older files, or they decided to change their TOS to no longer allow hotlinking or certain kinds of content (nsfw, fictional graphic content). See Optional Steps for even more options.
This is an easy, barebones way to permanently host images that you don't want deleted unexpectedly or that you can't host elsewhere. (Emphasis on barebones. This will not be a nice portfolio style site. Unless you decide to code that yourself!) You can follow the link above for an example of this type of site.
It is also EASY to upload and use on mobile devices after initial setup!
Tools you will need:
Cloudflare Pages/Workers is a free-to-use static site hosting service. This will publish your files and make them available online. There is a limit to the amount of data you can upload for free, but you can pay for proper hosting if you want to exceed it.
Github is a code sharing/storage platform. Your files will go here first before being published on Pages. You can edit and upload files through your browser at github.com, or through Github Desktop, a program you install on your computer. There are limits to Github repositories, but they are also generous (suggested 1GB to 5GB per repo).
Basic Setup
1. Create a github account
2. Copy this template repository hotlink-archive-template
Your website will be contained in a repository, a place where all the files and the revision history for your project are stored.
This template repository uses an "Action" (using python) to automatically create a "home" page with an Index of all the files in your repository every time it is updated.
NOTE: I recommend you set your repository to Private. Github's history feature is extensive, so if you have sensitive content or think you might want to delete something later, it will be hard to get rid of it completely once it's been committed and publicly available.
3. Enable Action permissions
In order for the Action script to work, you need to give Actions permission to read and write in your repository.
Within your repository, go to the tab Settings > Actions > General > Workflow Permissions
4. Create a Cloudflare account
5. Create a Pages (or Workers) project and link it to your Github repository
Your Pages project will create the front end of the site where the images will be displayed. You will be able to link those images to other platforms like AO3.
You can create either a Workers or Pages project by going to Add > Pages (or Workers). Name your project WISELY! This name will be your site's URL.
Workers vs. Pages
Workers is subsuming Pages on Cloudflare and now has all the same static hosting capabilities, in addition to its original server-side processing services. If you'd like, you can read more about this.
While Workers has similar capabilities, I recommend Pages for this project. Pages has the added bonus of a cleaner URL if you do not have your own domain: “MySite.pages.dev” in Pages vs Workers' “MySite.username.workers.dev”
You will be prompted to import an existing Git repository. You will need to give it access to your Github to do this.
Select the repository on your Github you made for your project, then hit "Begin Setup".
You do not need to change any settings on the next page, so hit "Save and Deploy". Your image hosting site will now be live!
The URL will be "https://ProjectName.pages.dev". It may take a few minutes to become accessible.
Now you're done with the basic setup!
How to Add files
You can add any files you want to link to on AO3/elsewhere through mobile, desktop browser, or the Github desktop program!
Here is how to do it on Github.com:
Open up the repository that you made (it can be found at github.com/username/repositoryname). You will see a list of folders and files that are in that repository.
Click into the folder "fan-stuff".
In the top right, go Add file > Upload files and drag in the images you want added. You will need to name the images BEFORE you upload them, as there is not an easy renaming feature within Github's browser interface.
In the Commit changes box, choose a title for what action you are doing. This will help you backtrack uploads if needed.
For example, it could be "Uploaded Batman Art". Make sure it's set to "commit directly to the main branch", then commit those changes. This will upload the files.
Now, if you visit your site, you will see your uploaded image under the "fan-stuff" folder!
To embed/link your image, navigate to your file on your Pages site and copy the URL in the address bar. This URL is what you will use to embed your photo (using HTML or "add image as URL" tools some sites have).
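For example, if your project were named "myhotlinks" and you uploaded a file called batman.png into the "fan-stuff" folder (both names hypothetical), the HTML embed would look something like:

```html
<!-- hypothetical project and file names -->
<img src="https://myhotlinks.pages.dev/fan-stuff/batman.png" alt="Batman art">
```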
Continue onto More Setup to customize your site and implement more advanced settings. See Tips/Troubleshooting if you're running into problems.
More Setup
Perform site customization/advanced setup with Github Desktop on your PC
Github’s web UI is great, but it has major limitations. I highly recommend that you use Github Desktop during the initial setup, as well as when you want to make major organizational changes to your files/site. Once you have everything set, though, you can use Github in your browser to upload whatever files you want to hotlink at the moment.
Download Github Desktop and “clone” (download a copy of) the repository you made.
This is the best time to rename/rearrange folders + files, etc.
There are other methods in the Troubleshooting section if you need, but Github Desktop is by far the easiest way
see Adding/Renaming Folders for important info on how to properly rename/add folders
see About the Index Page for how to customize your Index pages
Once you’re done editing, “push” (upload) all the changes you made to your online Github repository.
Having some sort of text editor like Notepad++ is useful for editing any code; the automatic color-coding is very helpful. You can edit in plain old Notepad as well, it just won't look as nice.
About the Index Page
The template repository uses a python Action to automatically create an HTML "home" page with an Index of ALL the files in the folder every time it is updated.
This is particularly convenient for mobile use, as you can upload a file, and the python action automatically updates the Index page.
If you don’t want this, just disable the “create-index” Action and delete the .py files. You can just type in the file locations to get to each file, or you can manually maintain a home/Index page yourself, which isn't hard if you know some basic HTML and can remember to do it consistently.
Also note that if you wish to change any of the content on your Index pages, you must edit the "index.py" file, not the "index.html" file. The "index.html" file gets re-written every time the "create-index" Action is run in order to keep the file index up to date.
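The template's actual script isn't reproduced here, but conceptually, what a "create-index"-style script does looks something like this sketch (the function name, exclusion list, and HTML are illustrative only; the real Action also handles subfolders and more):

```python
# Minimal sketch of an index-page generator -- NOT the template's actual
# create-index.py; names and structure here are illustrative.
import html
import os

# Files hidden from the generated index page
EXCLUSIONS = {"index.html", "create-index.py"}

def build_index(folder):
    """Write an index.html listing every non-excluded file in `folder`."""
    entries = sorted(
        f for f in os.listdir(folder)
        if f not in EXCLUSIONS and not f.startswith(".")
    )
    links = "\n".join(
        f'<li><a href="{html.escape(f)}">{html.escape(f)}</a></li>'
        for f in entries
    )
    page = f"<html><body><h1>Index</h1><ul>\n{links}\n</ul></body></html>"
    with open(os.path.join(folder, "index.html"), "w", encoding="utf-8") as fh:
        fh.write(page)
```

Because index.html is rewritten from scratch on every run, any hand edits to it are lost — which is why customizations have to go in the .py file instead.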
Adding/Renaming/Deleting Folders
Disclaimer: This is a bit convoluted because I am extremely unqualified to be working with python OR HTML. There’s probably an easy way to do this, but I don’t have the skill to do it, and most of the stuff here is copied from stuff I found around. If you know a better way to do things, please let me know, it’d make my life easier too!
Adding or renaming folders involves some extra steps.
1. The "index.py" file inside the folder needs to be edited to match the parent folder name.
The place you need to do this is found near the top of the file (highlighted below)
2. Then the outer-most "create-index.py" file needs to be updated to match the new name as well. If you’ve added a new folder, duplicate and adjust the code to match.
The place you need to do this is found at the bottom (highlighted below)
If you don’t need any folders at all, great! Just delete them and their contents! No need to edit any files. (Don’t delete “index.html” or “create-index.py” or “.github/workflows”!)
If you would like to have these folders for later use, leave them as-is and simply edit the index files.
Find the relevant lines of code at the bottom of "create-index.py", like in the previous step for renaming folders. You may delete this code, or comment it out (using # at the beginning of a line will make it “invisible” to the computer)
Then, add the folder’s name to the “exclusions” list at the top of the "create-index.py" file so that it doesn’t show up on your Index page (highlighted below)
You can also use this same concept to create "invisible" files/folders. Any files/folders included in the "exclusions" list in "(create-)index.py" will not be listed on the Index page, however they can still be found through the direct URL to the file.
On the flipside, this means simply hiding the file/folder from the Index page does not get rid of the file from your site. Anyone who has the URL will be able to find that file unless you remove it, or move its location to change the URL
Tips/Troubleshooting
(Re)name your files before uploading
It’s not possible to rename image/media files on Github’s web UI (it is possible with the local Git program). The "create-index" Action lists out the names of your files exactly, so you will end up with ugly strings of numbers and letters on your Index page if you don't rename them, which is terrible to look at and also plain old CONFUSING to navigate.
So if you're uploading on mobile or through Github on browser, name your files with easy to remember and distinctive filenames before you go ahead and upload them. This makes everything much easier, and it makes your Index page look nice :)
My website isn’t updating when I edit my Github repository!
Check to see if your Pages is retrieving from the correct branch, and if it has automatic deployments enabled.
Can’t see your Github repository when trying to link it on Cloudflare?
Check your Github applications Repository Access settings. Go to your ACCOUNT Settings > Integrations - Applications > Cloudflare > Repository Access
Index action is failing!
Go back to step 3 in Basic Setup and check if you’ve given Actions permission to read and write. If that’s not the issue, check to see if you’ve set up your "index.py" files correctly. The folder names should correspond to the parent folders, and the "create-index.py" file in the outer-most folder should have the correct folder names at the VERY BOTTOM.
How do I rename a folder (or move a file) in Github’s web UI?
It isn’t possible to directly rename a folder in Github’s web UI; doing it using Git on your computer is the most foolproof way. But there is a workaround (except for media files).
Go into the folder you want to rename and select a file such as “index.html” and enter the “edit” mode.
Go to the file name and backspace until you can edit the parent folder name as well. This will create a new folder with the new name.
You’ll have to do this to every file in the folder until they’re all in the new folder.
Unfortunately, you can’t do this with media files like png/jpg/etc, because entering the “edit” mode on a photo “breaks” it somehow, and bye-bye image :’) (Don’t worry if this happens, just don’t commit the change or roll it back in your history).
Optional Steps
Make deployment (semi-)Manual
You can play with cloudflare and github to make deployment of your site a manual step you have to trigger, instead of automatic with each commit (the default setting). This is a safeguard: if you accidentally make a change or delete something from your github, it won't affect your website.
Deploy w/ Branches
You could do a semi-automatic deployment with a "Production" branch on your github that is separate from the branch you edit. This creates an extra step before anything is published on Cloudflare. A safeguard against accidental changes/deletion of sorts :)
Go to Settings > Build tab > Branch Control
Choose your Production Branch (MAIN or CLOUDFLARE) and enable (or disable) automatic deployments
If you choose MAIN, every change you commit to MAIN will be published to Pages
If you choose CLOUDFLARE, any changes you make to MAIN will not show up on your Pages site until you Pull from MAIN to CLOUDFLARE
To Pull changes from MAIN to CLOUDFLARE, go to your github repository
Above your files on the Left, you will see a toggle to choose which branch you are on.
Choose Cloudflare. There will be a message like "This branch is 7 commits ahead of, 2 commits behind main." Click "2 commits behind"
Click "Create a Pull Request". Then click "Merge Pull Request". If everything is correct, this should trigger a build on your Cloudflare
Deploy w/ Github Actions
Or you can create a manual command that you have to enter on github to trigger a deployment on cloudflare. If you're paranoid about anything happening to your site due to a mishap on the Github side, this is a safe choice. Unless you manually trigger the command, your Pages site will be completely untouched, even if something happens to your repo.
This can be done in many ways. I think the most straightforward is with Deploy Hooks (maybe in conjunction with Actions if you want to make it mobile-friendly). It might be a bit complicated, but not too hard to figure out with some Google-fu.
Here are some links I think will be useful (note: I don’t use this method, so these haven’t been tested)
Manual trigger action tutorial
How to configure Github webhooks
Storing Locally instead of on Github
Although this guide is written with Cloudflare's Github integration in mind, particularly for easy online/mobile access, you can also keep your files locally on your PC and directly upload your assets onto your Pages project. This gives you full control over what happens to your files. (Keeping backups is a good idea. You can still use Github Desktop to do this, just keep your repository on your PC.)
Simply clone/download the repository as it is, customize it as you like, and create a NEW Pages project on Cloudflare, using "Direct Upload" to upload your files
Once you have connected a Pages project with Github, there is no way to change the deployment method to Direct Upload or vice versa. Direct Upload is also not available for Workers.
One thing that will NOT work the same is the "create-index" Action, which only works on Github.
I have made a "create-index.exe" that will execute the "create-index.py" files in the exact same way as they would work with the Action. You do not have to install python for this to work (if I did everything right). Simply run "create-index.exe" whenever you make a change and want to update the "index.html" files.
Remember, this is EXACTLY THE SAME as the "create-index" Action, meaning you have to edit each "index.py" file when you rename folders, add a folder, want to exclude a file from the Index page, etc. (See Adding/Renaming Folders for how to do this)
Find me on Bluesky. Or if you have a problem, open an Issue on this project :)
I'll try to answer your questions as best I can! But really, I am the most amateur of amateurs and figured this all out using Google, so I might not be of much help ^^;
I also recommend Squidge Images (an offshoot of Squidge.org) as a fairly trustworthy alternative. However, Squidge Images does have some additional rules that Squidge does not, and what crosses the line is at their discretion.
I also posted this over on AO3!
Text
how shinigami eyes works
first - the short version:
shinigami eyes is a browser extension that uses a fancy list called a bloom filter to identify people who can be considered transphobic or trans-supportive. a bloom filter is a kind of list that lets you check if someone is in the list, without being able to actually know the contents of the list. i go into how it works below, but the short version is Math™️, with a side effect of having false positives - thinking an item is in the list when it really isn't. whenever a social media username is detected (the exact method depends on the website), it checks the username against that list, and applies some CSS to change the color of the text depending on what list the username is in.
the long version
shinigami eyes can be split into a few parts:
bloom filters
submissions
name highlighting
bloom filters
i'll start with the most complex part - bloom filters. the most common misconception about Shinigami Eyes is this: the filters are not updated in real-time. they are shipped with the extension, which was last updated in november 2022, according to the FF extension site. in other words: nothing marked since then can be seen by anyone other than whoever marked it. you can see that in the code here, where it loads the bloom filters from a data/[something].dat file included in the extension (but not in the github repo).
the following information about bloom filters is my summarized version of this page.
bloom filters are, in a slightly longer explanation than before, a way to know if an item is *not* in a list with 100% certainty, but there's a false positive rate that grows as more names are added to the list. a bloom filter of a single size is able to handle any number of items in the list, though. there's also the issue that you can't *delete* stuff from a bloom filter - you would need to regenerate it from scratch to do that.
now that the medium-sized explanation of what they are is done, let's go into how they work. a bloom filter is a set of n bits, initially all set to zero. to add items into the filter, you need a few hash functions - in this example i'll use h1, h2, and h3, with n=10. if i wanted to add the text asyncmeow to the list, i would do this:
h1("asyncmeow") % n // n = 10, h1(...) % 10 = 9
h2("asyncmeow") % n // n = 10, h2(...) % 10 = 5
h3("asyncmeow") % n // n = 10, h3(...) % 10 = 8
after that, i have a list that looks like this (keep in mind that the list is zero-indexed):
[0, 0, 0, 0, 0, 1, 0, 0, 1, 1]
you can then check if something is in the filter by running the same hashing functions and checking if the result bits are set in the filter. you can access the bloom filters used for shinigami eyes by going to about:debugging#/runtime/this-firefox in Firefox and clicking "Inspect" on Shinigami Eyes, then running bloomFilters in the console. as this could possibly change in an update, i don't want to go into how they are set up, and i haven't dug into their bloom filter code well enough to say anyways.
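to make the bit-twiddling concrete, here's a tiny toy version in python (this is not shinigami eyes' actual code - the extension is written in javascript/typescript, uses different hash functions, and has way more than 10 bits):

```python
# Toy bloom filter, following the h1/h2/h3 example above.
# Real implementations use purpose-built hash functions and large bit arrays.
import hashlib

N_BITS = 10  # n in the example above

def _hashes(item: str):
    # Derive three cheap "hash functions" from one sha256 digest,
    # standing in for h1, h2, h3.
    digest = hashlib.sha256(item.encode()).digest()
    return [digest[i] % N_BITS for i in range(3)]

class BloomFilter:
    def __init__(self):
        self.bits = [0] * N_BITS  # initially all zero

    def add(self, item: str):
        for pos in _hashes(item):
            self.bits[pos] = 1

    def might_contain(self, item: str) -> bool:
        # False means DEFINITELY absent; True only means "probably present",
        # because other items may have set the same bits (false positives).
        return all(self.bits[pos] for pos in _hashes(item))
```

note how deleting is impossible: clearing a bit might also "delete" some other item that happens to share it, which is why the whole filter has to be regenerated instead.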
submissions
submissions on shinigami eyes are encrypted (as in - encryption separate from HTTPS), then posted to https://shini-api.xyz/submit-vote. you can see the code for this here. when you right click someone to mark them, their name is stored in the local data of your browser in an overrides property.
name highlighting
name highlighting is done by checking them against the bloom filters and your local overrides. if a user is present in either bloom filter, or present in your local overrides, they are marked accordingly. not much to it from there.
that's really it, i think? feel free to ask if you have any questions! nya :3
Text
Revamp Old Webpages | #1
Monday 16th October 2023
As I mentioned before, I found my old projects over on replit and then transferred them to my GitHub! I transferred 12 projects and now I am going through each one and seeing how I can revamp them! I don't want to lose their original touch entirely, but I still want to make the code a bit better, kind of thing~!
Up first is my project 'Alpha-Pink-Login-Form', which was the very first project out of them all! I didn't change too much from it, but added some JavaScript to replace the other 2 html files I had and just used one!
Link to project: live page | github repo ♡
🛠️ Issues
✘ I had extra HTML files for each popup I wanted e.g. sign-up, login and forgot password
✘ Not responsive on phones
✘ Unnecessary things in the head tag that could be imported in CSS
✘ Unnecessary things in the body tag as well
✘ The CSS is a mess and could condense down a lot
🏆 Fixes
✔ Only one html file used
✔ Used JavaScript to load the previous pages I had before
✔ Responsive on other devices and not going off screen
✔ Used off-black as normal black was not fitting with my colour scheme
✔ Added a box-shadow to separate from the background more + effect
✔ Deleted all the unwanted elements and code overall - HTML and CSS code
✔ Fixed the stuff in the head tag
⤷ ♡ my shop ○ my mini website ○ pinned ○ navigation ♡
#revamp old websites project#codeblr#coding#progblr#programming#studyblr#studying#computer science#tech
Text
FYI: Data from deleted GitHub repos may not actually be deleted
Source: https://www.theregister.com/2024/07/25/data_from_deleted_github_repos/
More info: https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github
Text
Chatrooms, Socket.io and a naughty kid.
A long time ago, I was bored in a history lesson in high school. I moved over to the shared drive on our school's network. This is where teachers stored files they wished to access across campus with ease. Hidden deep within nested folders with obscure names was a tiny chatroom.
The chatroom was fairly basic, but it allowed my friends to talk to each other when we were in different classrooms. I had found the code online and told everyone I'd made it; everyone thought I was some uber hacker.
We used it quite frequently until one day a friend got caught - the game was up. We hadn’t realized that, for the next two days, a teacher was monitoring the chat—and the chat saved the logs.
As you can imagine, days' worth of conversations between stupid boys discussing things they definitely wouldn’t want their teachers to see had been printed off and presented to me in the head of year's office. It got me in a fair bit of trouble; I was banned from using computers for a while. It taught me one valuable lesson:
Remember to delete your logs!
But... If I was to do it again?
Have you ever been on a website where chat messages appear instantly, like magic? Or played an online game where your move is reflected in real-time for all players? Behind the scenes, there’s a nifty technology making this smooth interaction possible—Socket.io.
Socket.io is a JavaScript library that enables real-time, bidirectional, event-driven communication between web clients and servers. It’s built on WebSockets, a protocol that creates a persistent connection, allowing data to flow back and forth instantly. But unlike plain WebSockets, Socket.io comes with extra goodies like automatic reconnection, fallback mechanisms, and room-based messaging, making it more reliable and flexible for developers.
Socket coupled with React makes for very dynamic webpages.
I really wanted to learn Socket, which means only one thing: another side project that I can totally focus on and forget about all my uni work stacking up behind me!
The idea was to create a new chatroom using React and Socket.io that would allow groups of users to connect to specific channels and chat with their friends in real time. I didn't want any logs of the conversation and I did not want any form of authentication (probably not the best idea, but this was to focus on Socket).
Gone were the script-kiddie copy and paste code from Year 8 in school. Now it was time to build a real chatroom, from scratch.
Introducing Chatterbox
Ta-dah! My teenage self would be very impressed with this new shiny version of the original chatroom. Alright, it's not going to be winning any design awards, nor will it put Messenger out of business - but it does its job.
Here is a brief demo of the chatroom:
youtube
You can find the github repo below:
GitHub - alexheywood/chatterbox: A SPA chat room made with react and socket.io
Benefits of Socket.io
Building this project really showcased the benefits of using Socket.io for projects like this. Here are some of the standout benefits:
Fast Messaging
Socket.io has barely any perceivable delay, it really puts the "instant" in instant messaging.
No More Awkward Disconnects
Socket gracefully reconnects, picking up right where you left off. No dropped messages, no frustration—just smooth sailing.
Custom Chat Rooms for Every Occasion
You can create multiple rooms, so only the right people see the right messages.
Efficient, Event-Based System
Unlike traditional setups that constantly ping the server (which is like calling someone every five minutes to check if they’ve read your email 🙄), Socket.io uses events—messages are sent only when needed, keeping everything lightweight and snappy.
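To illustrate the room-based, event-driven idea without any library at all, here's a dependency-free sketch of the concept (this is not Socket.io's API and not Chatterbox's code — just the core idea of "push events only to the right room" in a few lines):

```python
# Conceptual sketch of room-based, event-driven messaging: events are pushed
# to room members only when something happens -- no polling involved.
from collections import defaultdict

class TinyHub:
    def __init__(self):
        self.rooms = defaultdict(set)     # room name -> connected client ids
        self.inboxes = defaultdict(list)  # client id -> delivered events

    def join(self, client, room):
        self.rooms[room].add(client)

    def leave(self, client, room):
        self.rooms[room].discard(client)

    def emit(self, room, event, payload):
        # Only clients in `room` receive the event; everyone else's
        # inbox is untouched.
        for client in self.rooms[room]:
            self.inboxes[client].append((event, payload))
```

A real Socket.io server does the same routing over persistent WebSocket connections, plus the reconnection and fallback machinery described above.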
It's not built for everything.
That said, Socket.io is not the best choice for every project. Some of its drawbacks are:
Performance Bottlenecks
Socket.io wraps WebSockets in a layer of convenience, but that extra abstraction can sometimes make it slower than pure WebSockets. If you're handling thousands of simultaneous connections (think: a massive multiplayer game or stock market updates), you might run into latency issues.
Complex Debugging
Let’s be honest—when Socket.io misbehaves, debugging can feel like a wild goose chase. Its event-driven nature means messages might not arrive exactly when expected, making tracking down bugs a frustrating experience. You may find yourself scratching your head, wondering why that message didn’t make it through.
Version Compatibility Drama
Ever had two libraries refuse to play nice together? Well, Socket.io versions sometimes cause compatibility headaches. If your server and client versions aren’t aligned, you might see unexpected connection issues. Yep, the dreaded "Why isn’t this working?!" moment.
Not the Best for Simple Use Cases
Sure, Socket.io is powerful, but not every application needs real-time, persistent connections. If you’re just fetching small updates periodically, using basic AJAX polling or server-sent events (SSE) might actually be a simpler, more efficient choice. Socket.io can be overkill, adding unnecessary complexity to projects that don’t really need it.
Altogether, this project was actually quite successful. It took around two weeks of developing in the evenings to get to a working first version that I thought worked well. I really liked that I kept it simple, with anonymity and statelessness being the key features here - a "portable lightweight chatroom". You could even stick this on a USB and get it running in schools, libraries, etc.
#software development#programming#web development#webdev#code#coding#computing#webdesign#reactjs#socket.io#Youtube
Text
🌟 Git Branch Best Practices: A Must-Know Guide for Developers! 🌟
Ever struggled with messy repositories, merge conflicts, or lost code? 🤯 Managing Git branches effectively is the key to smoother collaboration and efficient development!
💡 In our latest guide, we break down: ✅ Why Git branching strategies matter ✅ Different Git workflows (Git Flow, GitHub Flow, Trunk-Based) ✅ Naming conventions to keep your repo organized ✅ Best practices for branch creation, merging, and deletion
🚀 Level up your Git game today! Read the full blog here: https://techronixz.com/blogs/git-branch-best-practices-complete-guide
💬 Which Git strategy do you prefer? Drop your thoughts in the comments! 💻✨
Text
I found 1000 GitHub repos with malware. Can we get them deleted?
https://timsh.org/github-scam-investigation-thousands-of-mods-and-cracks-stealing-your-data/
Text
Capturing the ambience of Halo 2's E3 demo (2003)
Opening Earth City with Mod Tools
We can enter the demo scenario(s) using Halo 2 Mod Tools (aka Halo 2 Editing Kit, or H2EK) "Tag Test" build, but we must supply a valid tag set for doing so.
Clone the H2E3 Digsite repo (or download a ZIP from GitHub, as the repo is very large and subject to quotas). Besides including the tags, this repo includes a methodology for integrating its E3 demo build tags with Halo 2's retail ("stock") tags.
To start this process, rename your existing H2EK "stock" tags directory to something else;
H2EK\tags -> H2EK\tags-stock
Now, we'll use the repo's included script to build an E3-friendly tags directory:
python.exe "F:\digsite\H2E3-main\generate_tags_dir.py" "F:\SteamClient\steamapps\common\H2EK\tags-stock" "F:\SteamClient\steamapps\common\H2EK\tags"
Notice that the "stock" retail tags are never modified by this approach: to revert to using only stock tags, we can simply delete our generated H2EK\tags folder and restore H2EK\tags-stock to its original name.
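Conceptually, generating a merged tags directory like this is a "copy stock, then overlay E3 on top" operation. The repo's actual generate_tags_dir.py isn't reproduced here and may work quite differently; this is only a hypothetical sketch of the idea:

```python
# Hypothetical sketch: build a merged tags directory by copying the stock
# tags and overlaying the E3 repo's tags on top (E3 wins on conflicts).
# The real generate_tags_dir.py may differ.
import shutil
from pathlib import Path

def build_merged_tags(stock_dir, e3_dir, out_dir):
    out = Path(out_dir)
    shutil.copytree(stock_dir, out)                   # start from stock tags
    shutil.copytree(e3_dir, out, dirs_exist_ok=True)  # overlay E3 tags
```

Either way, the stock directory itself is never touched, which is what makes reverting as simple as deleting the generated folder.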
Once this is set, we can open the map via map_name command:
map_name scenarios\solo\earthcity\earthcity_e3\earthcity_e3
Using Sev's shaders in Tag Test
As part of the demo re-creation, Sev created custom pixel shaders to approximate the look of how H2's engine then looked. You'll get these by running the demo map through MCC, but with Tag Test we need to place these shaders where they can be picked up.
First, back up the retail shaders_shader_passes_shadow_shadow_apply_combined_0 and shaders_shader_passes_shadow_shadow_apply_combined_1 PSH files inside tags-stock\rasterizer\pixel_shaders_dx9. Next, copy the corresponding files from Sev's repo inside Halo-2-Custom-Shaders\E3\Full.
Running Earth City in MCC
This is much simpler than the Tag Test approach, but it lacks a command prompt.
To play, subscribe to the relevant Workshop and then start MCC in mod mode. You'll find the mission(s) listed under Halo 2 Anniversary's campaign menu.
By enabling the Blind and Acrophobia skulls we can fly around and capture no-HUD footage.
Recording footage
From here, we can use the command console to customize our display and camera position. See my main Halo 2 post for more details.
The Digsite team recommends running at 640x480 resolution, and setting classic gamma to 6.
Mixing ambient sounds
You'll find WAV assets in the H2E3 Digsite repo. To see how they are actually played back in-game, we can look at the relevant tags inside Guerilla.
These ambience stems are combined randomly at runtime. It seems that this approach was pushed to the extreme for this demo, rather than applying the "detail sounds" approach of retail.
We can use Sapien to inspect where Earth City's three soundscapes are applied.
Text
git is a blessing and a curse because on the one hand all of my writing is backed up in a GitHub repo but on the other I almost accidentally lost all the work I did yesterday and in the process deleted literally everything from my cloud storage
#starting to think that using cloud storage for my writing isnt as secure as just using a github repo#for this very reason#(im using two computers for writing so itd just be better to fetch updates to the repo)
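A sketch of the two-computer routine the tags hint at: treat the repo as the source of truth, push at the end of each session, and fetch before the next one. Everything below (the paths, the local bare "remote" standing in for GitHub, the file names) is illustrative:

```shell
set -e
demo=$(mktemp -d)

# a local bare repo standing in for the GitHub remote
git init -q --bare "$demo/remote.git"

# "computer 1": commit and push at the end of a writing session
git clone -q "$demo/remote.git" "$demo/laptop" 2>/dev/null
cd "$demo/laptop"
git config user.email "you@example.com" && git config user.name "you"
echo "chapter 3 draft" > ch3.txt
git add ch3.txt && git commit -q -m "draft: chapter 3"
git push -q origin HEAD

# "computer 2": always sync before touching anything
git clone -q "$demo/remote.git" "$demo/desktop" 2>/dev/null
cd "$demo/desktop"
git pull -q --ff-only    # fast-forward only: refuse to merge silently
cat ch3.txt
```

The `--ff-only` flag is the safety net here: if the two machines ever diverge, the pull fails loudly instead of quietly creating a merge you didn't ask for.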
1 note
·
View note
Text
A detailed step-by-step guide to deploying Graylog with Docker

The documentation I used as a reference for building Graylog is here. All I did was take their configuration and adapt it to my own needs. Note that the setup I'm using has 8 CPUs and 12 GB of RAM. In this post, I'll show you how to deploy Graylog with Docker so you can start collecting logs right away.
1. Deployment model
In this model I run three containers - graylog, opensearch, and mongodb - which talk to each other over the Graylog_net network.
Only the graylog container exposes port 9000:9000, used to reach the web UI via the host's IP; the remaining ports receive logs from other services:
"5044:5044" # Receives logs from Filebeat
"5140:5140" # Receives logs from syslog
"12201:12201" # Receives logs via GELF UDP
"13301:13301" # Custom port (repurposed for another service)
"13302:13302" # Another custom port
2. Installing Graylog with Docker
First, clone my Docker repo from GitHub:
cd /opt/
git clone https://github.com/thanhquang99/Docker
Next, run the Docker Compose file:
cd /opt/Docker/Graylog/
docker compose up
You can adjust the variables in the compose file to change the Graylog or OpenSearch username and password. If you leave them unchanged, the default Graylog password is: minhtenlaquang
You should also adjust how much RAM and CPU Graylog and OpenSearch are allowed to use so they fit your machine. As a rule of thumb, OpenSearch takes about 50% of RAM and Graylog about 25%.
Wait a while for Docker Compose to finish, then open http://<ip-Docker-host>:9000 and log in with user: admin, password: minhtenlaquang.
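If you do change the admin password, note that Graylog expects it as a SHA-256 hex digest rather than plain text. A minimal sketch of generating one (GRAYLOG_ROOT_PASSWORD_SHA2 is the variable name used in common Graylog compose setups - check the names actually used in this repo's compose file):

```shell
# hash the desired admin password before handing it to the compose file
PASS='minhtenlaquang'
GRAYLOG_ROOT_PASSWORD_SHA2=$(printf '%s' "$PASS" | sha256sum | cut -d' ' -f1)
echo "$GRAYLOG_ROOT_PASSWORD_SHA2"   # a 64-character lowercase hex string
export GRAYLOG_ROOT_PASSWORD_SHA2
```

With the variable exported, `docker compose up` will substitute it wherever the compose file references it.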
3. Tuning the resources Graylog uses
Graylog settings you should know about so you can tune them to match your resources:
processbuffer_processors: Number of processors for the process buffer.
outputbuffer_processors: Number of processors for the output buffer (Elasticsearch).
processor_wait_strategy: Wait strategy used by processors when there is no work to do (yielding, sleeping, blocking, busy_spinning).
ring_size: Size of the ring buffer.
message_journal_enabled: Enables or disables the message journal.
message_journal_max_size: Maximum size of the message journal.
inputbuffer_processors: Number of processors for the input buffer.
inputbuffer_ring_size: Size of the input buffer's ring buffer.
retention_strategy: Data retention strategy (e.g. delete, archive).
rotation_strategy: Index rotation strategy (e.g. count, time).
retention_max_index_count: Maximum number of indices to retain.
rotation_max_index_size: Maximum index size before rotation.
rotation_max_index_age: Maximum index age before rotation.
tcp_recv_buffer_size: TCP receive buffer size.
tcp_send_buffer_size: TCP send buffer size.
discarders: Configures the number and type of discarders that handle messages exceeding limits.
threadpool_size: Number of threads in Graylog's thread pool.
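As an illustration, a handful of these settings might look like the fragment below inside graylog.conf. The values are made-up examples for a small host, not recommendations, and the sketch writes to a temporary file rather than the real volume path:

```shell
# write an example fragment the way it would appear in graylog.conf
conf=$(mktemp)
cat > "$conf" <<'EOF'
processbuffer_processors = 5
outputbuffer_processors = 3
inputbuffer_processors = 2
message_journal_enabled = true
message_journal_max_size = 2gb
EOF
grep message_journal_max_size "$conf"
```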
As a demonstration, let's tune the message_journal_max_size setting.
First, inspect the Graylog container to find its volumes:
docker inspect graylog
Then edit the file:
vi /var/lib/docker/volumes/graylog_graylog_data/_data/graylog.conf
Restart Graylog:
docker restart graylog
Check the result:
Conclusion
I hope this post helped you deploy Graylog with Docker and put it to work in your own system. "Docker Graylog" simply means deploying Graylog - a log management and analysis platform - with Docker. This makes Graylog easy to set up, configure, and manage in containers, providing flexibility, scalability, and a simpler installation process. A Docker Graylog stack usually ships with companion containers such as MongoDB (storing configuration data) and Elasticsearch (processing and storing logs).
Source: https://suncloud.vn/huong-dan-trien-khai-docker-graylog-theo-cac-buoc-chi-tiet
0 notes
Text
This is worse than it seems: Unity claims to have a """"flawless"""" detection system on the lookout for every download of a game made with the engine.
This can only mean that either they're just bluffing but developing what is essentially spyware, or that the engine already ships with said spyware, and their claims of it being flawless are actually true.
They also deleted the GitHub repo detailing their TOS for everyone to see, thinking no one would notice.

#unity#tf is this#what a great way to ruin your userbase's trust#the unity tutorials I've been taking for 6 months are worthless now lmao#Unity game engine#indie games
2K notes
·
View notes
Text
9 August 2024

◢ #unknownews ◣
The latest roundup of the week's subjectively most interesting content from the IT industry is now available - enjoy :)
1) Anyone can access data from deleted and private repositories on GitHub https://trufflesecurity.com/blog/anyone-can-access-deleted-and-private-repo-data-github INFO: A serious security flaw in GitHub, dubbed Cross Fork Object Reference (CFOR), has been discovered; it allows access to data from deleted forks, deleted repositories, and even private repositories. The article demonstrates how, knowing a commit hash, you can reach sensitive data that should in theory be inaccessible, which poses a huge risk to organizations using GitHub. The authors stress that this behavior stems from GitHub's deliberate architecture, but most users are unaware of these mechanisms, which can lead to unintentionally exposing confidential information, e.g. API keys.
2) Vector databases - a quick introduction to working with them (video, 15m) https://youtu.be/vcZK6B61teY INFO: An extremely condensed introduction to working with vector databases, using Qdrant as the example. How to create a database, feed it your own data, and query it for specific information using natural language. The video demonstrates two APIs - OpenAI's (for generating the embeddings themselves) and Qdrant's native one.
3) A CTF from Wiz - trick the AI and get a free ticket https://promptairlines.com/ INFO: An online game in which you chat with a chatbot working for an airline. Your task is to get it to sell (well, give) you a ticket for free. Naturally, pulling this off requires prompt-injection techniques.
4) Secrets leaked through a compiled Python file https://jfrog.com/blog/leaked-pypi-secret-token-revealed-in-binary-preventing-suppy-chain-attack/ INFO: An interesting case of a leaked access token for the GitHub repositories of Python, PyPI, and the Python Software Foundation, detected by the JFrog team in a public Docker container - something that could potentially have led to a catastrophic supply-chain attack. See how the leak was traced and how the affected organization responded.
5) Techniques for bypassing anti-phishing mechanisms - how do they work? https://posts.specterops.io/like-shooting-phish-in-a-barrel-926c1905bb4b INFO: This article shows how to get around phishing protections such as link crawlers. The author covers techniques including CAPTCHAs, multiple redirects, bot detection via browser fingerprinting, ASN blocking, and JavaScript alerts, while also highlighting the weaknesses of link crawlers themselves. An interesting read, especially for people in the cybersecurity industry.
6) Container escape techniques in cloud environments https://unit42.paloaltonetworks.com/container-escape-techniques/ INFO: The article discusses container escape techniques and their potential impact, and shows how to detect this kind of attack.
7) How does Google handle JavaScript during the indexing process? https://vercel.com/blog/how-google-handles-javascript-throughout-the-indexing-process INFO: Can Google render a page that uses JavaScript? Does it treat JavaScript-heavy pages differently? The answers to these questions are key to optimizing pages for search engines. The article presents the results of research aimed at explaining how Google handles JavaScript during indexing.
8) A roundup of July's tech news - from Fireship (video, 9m) https://youtu.be/l0e9i8zXcIs?si=V3tk1E75adeo7utV INFO: YouTuber Fireship, in his distinctive humorous style, covers the big IT-industry events of July. There is a bit about Google's AI, changes in Node.js, the bugs in Intel processors, and several other notable stories - more than a dozen news items in total.
9) Hiring and managing passionate engineers https://newsletter.posthog.com/p/hiring-and-managing-cracked-engineers INFO: A collection of valuable tips on working with exceptionally talented engineers, referred to as "cracked engineers". The article defines the characteristic traits that make it easier to spot such people (or to count yourself among them). It presents six key strategies for managing them effectively, stressing the importance of enthusiasm, autonomy, and creating the right work environment. Definitely a read for people in leadership positions.
10) Managing underperformers https://jackdanger.com/managing-underperformers/ INFO: Managing low-performing employees is quite a challenge for managers. The article distinguishes two main types of underperformance: refusing to align with the company's goals, and being unable to do the work - and presents concrete strategies for handling each of them.
11) The essentials of prompt writing - 13 good tips https://frontbackgeek.com/prompt-writing-essentials-guide/ INFO: In the world of artificial intelligence and machine learning, the ability to write prompts has become a key skill. This article collects over a dozen tips that will help you write better queries and better understand how to communicate with language models.
12) Is AI increasing workloads and falling short of managers' expectations? https://www.forbes.com/sites/bryanrobinson/2024/07/23/employees-report-ai-increased-workload/ INFO: A new study of 2,500 employees and managers worldwide reveals that deploying artificial intelligence in the workplace is producing unexpected results. Contrary to executives' expectations, 77% of employees using AI say the technology has increased their workload and made the expected productivity gains harder to achieve. The study points to a gap between managers' optimistic expectations and employees' actual experience, underscoring the need to better understand and deploy AI in the workplace.
13) How to implement a smart tech-debt policy https://zaidesanton.substack.com/p/how-to-implement-20-for-tech-debt- INFO: The article discusses the idea of dedicating 20% of working time to paying down technical debt in development teams. The author offers practical tips on how to introduce such a rule, emphasizing how important it is to apply it consistently. You will find concrete suggestions for effective measures to reduce technical debt.
14) How to understand and remember complex concepts better https://learnhowtolearn.org/how-to-understand-and-retain-any-concept-10x-better/ INFO: The author presents an innovative learning technique that promises ten-times-better understanding and retention of any material. The method is based on a simple but remarkably effective approach to absorbing knowledge. The article includes practical tips on how to apply the technique in everyday life.
15) How does Postgres store data on disk? https://drew.silcock.dev/blog/how-postgres-stores-data-on-disk/ INFO: The author explains in detail how PostgreSQL stores data on disk, discussing the directory structure, files, and pages used by the database. He also explains concepts such as the heap, data pages, and MVCC (Multiversion Concurrency Control). The topics are illustrated with useful, practical examples. A read for people who like to dig deep into how every technology works.
16) A web application security report - from Cloudflare https://blog.cloudflare.com/application-security-report-2024-update INFO: Cloudflare presents an overview of the state of web application security in 2024, highlighting trends in the security industry. DDoS attacks remain the most popular attack vector. Around 7% of all web traffic is flagged by various security systems as malicious requests. A huge share (over 60%) of dynamic web traffic consists of API requests. These and other findings can be found in the report.
17) Using LLMs to find bugs in software https://engineering.razorpay.com/secure-code-reviewer-copilot-e4f575f42591 INFO: Razorpay describes its experience integrating language models into its internal code-review process to improve the security of its applications. See how LLMs can help detect vulnerabilities and raise application security.
18) How the Digital Services Act (DSA) is changing the Internet and what it means for you https://webmetric.com/wiedza/inna-perspektywa/jak-digital-service-act-zmienia-internet-i-co-to-znaczy-dla-kazdego-kto-dziala-w-sieci/ INFO: The EU's new regulation, the Digital Services Act (DSA), is thoroughly shaking up the online world - all for our safety, of course. The DSA makes online platforms get to work: they must quickly remove illegal content, disclose how their content-recommendation algorithms operate, and better protect kids online. The article explains in detail what the DSA is about and how it will affect various businesses and ordinary people on the net.
19) The Stack Overflow Developer Survey 2024 - a walkthrough https://survey.stackoverflow.co/2024/ INFO: A discussion of the results of Stack Overflow's annual survey. Which languages are the most popular, which databases developers use, how much they involve artificial intelligence in their work, and what annoys and delights them most at work. You will learn all this and more from the walkthrough of the survey results.
20) A detector for counterfeit flash memory (cards/USB sticks) https://fight-flash-fraud.readthedocs.io/en/latest/introduction.html INFO: Bought a 4 TB flash drive for pennies from an Asian shop, yet somehow cannot copy even a 4 GB file onto it? Someone has probably tampered with the memory and changed how it reports itself to the system. This application can, first, detect such fraud and, second, overwrite the falsified flash-memory values, making the drive normally usable.
21) How to remove elements from arrays in JavaScript (9 ways) https://jsdev.space/howto/remove-from-array/ INFO: A seemingly simple operation, and yet removing elements from an array in JS can cause some trouble. The article presents as many as nine different ways to perform it - from popular ones like pop() and shift(), through more advanced ones like splice() and filter(), to lesser-known techniques using the delete operator or loops.
22) Text dynamically fitted to its width - CSS https://kizu.dev/fit-to-width/ INFO: The article presents an innovative solution to the problem of fitting text to its container's width in CSS, using the new text-wrap: balance property. The author discusses various scenarios for applying the technique, showing how it can be used to improve the readability of text on web pages.
23) Terminal tricks - a compilation of one-liners https://github.com/onceupon/Bash-Oneliner INFO: A collection of useful terminal commands and tricks for data processing and Linux administration. These so-called one-liners can come in handy in your daily work and, along the way, level up your terminal skills.
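To give a flavor of the sort of one-liner that collection contains, here is an illustrative example of my own (not taken from the repo): counting unique IPs in an access log, most frequent first.

```shell
# sample "log" piped through the classic count-and-rank pipeline
printf '1.1.1.1 GET /a\n2.2.2.2 GET /b\n1.1.1.1 GET /c\n' \
  | awk '{print $1}' | sort | uniq -c | sort -rn
```

Swap the `printf` for `cat access.log` and the same pipeline works on a real log file.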
24) Database design using Google Calendar as an example https://kb.databasedesignbook.com/posts/google-calendar/ INFO: Another article in the software-design series. This time the author sets out to design a database for a project similar to Google Calendar. Good material for anyone interested in software engineering and database design.
25) Audapolis - an audio editor with automatic text transcription https://github.com/bugbakery/audapolis INFO: An open-source application very similar to the commercial Descript. If you are editing an audio file containing human speech (e.g. a lecture), you see its transcription. By deleting words from the transcription, you also delete them from the audio file. You could jokingly say it is a text-based audio editor.
26) Google will NOT block third-party cookies in Chrome https://stackdiary.com/google-will-not-phase-out-tracking-cookies-in-chrome-after-all/ INFO: Google has unexpectedly changed its plans to block third-party cookies in the browser. The move was supposed to improve user safety and severely limit tracking. Instead of blocking such cookies outright, however, Google intends to give users the choice of whether to allow them, while continuing work on an alternative solution, Privacy Sandbox. An important decision, especially for the advertising industry. I am just curious how users will express that consent - and whether it will become yet another annoying prompt when entering every website.
== LINKS FOR PATRONS ONLY ==
27) Interview questions for Red Teams https://uw7.org/un_1c7cfd075ef37 INFO: An extensive list of questions and topics related to Red Team work, covering a wide range of subjects from the absolute basics, through attacks on Windows systems, to advanced techniques for evading defenses. The material can serve both as a guide for preparing for Red Team job interviews and as a set of topics for expanding your skills.
28) An introduction to exploit development - a guide https://uw7.org/un_58620ef142d09 INFO: The guide lays out a comprehensive learning path for anyone interested in exploiting software vulnerabilities, with particular emphasis on binary exploitation. The author details the required skills, links to additional materials, and gives the order in which to work through them, focusing on topics such as C programming, x86-64 assembly, Linux fundamentals, and a few advanced application-attack techniques. It is worth following the links - among other things, they contain exercises that will significantly deepen your understanding of the subject.
0 notes
Text
FYI: Data from deleted GitHub repos may not actually be deleted
http://securitytc.com/TB4FMm
0 notes
Text
GitHub spammer
Today the discussion board at jmonkeyengine's GitHub repository was hit by a spammer. I'd never seen GitHub spam before, and I hope to never see it again!
I'm not sure why the repo even has a discussion board. It's not like there's much traffic, since 90% of the project's discussions take place via Discourse. (Another 9% take place in Discord chat.) Prior to today, the GitHub board had 2 posts.
By the time I discovered the spam, the spammer's account had been deleted, so there was no action for me to take against the account. All I could do was delete the posts, which I did. There were about 8 or 9 posts, all very similar and easily recognized as spam.
I doubt the spammer benefited in any way from all this waste of time and bandwidth. What a pathetic piece of shit they must be!
#people suck#github#spammers#discussion#open source#discourse#discord chat#new ways#repository#time wasted#piece of shit
0 notes