Authentication, bypassing the system
Since Barbara signs up for new websites using Sign in with Google, I'm going to get her account credentials by creating a fake website and tricking her into signing up for it. We need to trick her into visiting our website; maybe we can achieve this with a targeted phishing email, or by registering a lookalike domain for a site she commonly visits, like amazone.com. When she signs up to the usual website she'll get a Sign in with Google button, except it's fake, and she'll send her credentials straight to us: her username and password. We'll then log in, which will trigger her Google Authenticator; she'll be expecting this and gladly send us the code, which we can then give to Google to progress the process. The final problem is we're logging in from a computer she doesn't trust. Here we'll make up some BS and display a popup saying that all devices must be re-proven every 90 days, and she'll enter one of her 10 single-use backup codes. From here we'll deregister every device and change the Google Authenticator details. Cue the Hollywood hacking scenes: <frantic keyboard-presses> I'm in.
This is a pretty common tactic used by criminals like these guys. By posing as Google you can trick users into granting "Google" access to their account; what the attackers are really after is the user's OAuth token. While what I propose takes it a step further by gaining the password as well, OAuth is very easily abused for malicious intent, and maybe access to the account via the token alone is enough to achieve your malicious goal.
https://www.pcworld.com/article/3192484/russian-hackers-use-oauth-fake-google-apps-to-phish-users.html
50/50 chance I’d click allow if I knew nothing about computers tbh.
OAuth is pretty dumb, and it sucks for a lot of reasons: implementation is difficult and complex, and granting access to a malicious service allows it to impersonate you on other services. Where possible, sign up for an account with your email address, and use a password manager to come up with a decent password.
The lesson to learn here is never to accept an OAuth token request from a service you didn’t ask for. But how do you know what is, and isn’t legitimate?
Social Engineering: The Art of Deception
I started reading Kevin Mitnick’s book a few weeks ago, I’m about halfway through, and haven’t enjoyed it as much as the others I’ve read.
The book basically serves as a series of case studies of social engineers working to deceive and break into systems. What I like is that it serves less as a tutorial on how to perform social engineering and more as advice on how to recognise when someone is performing it on you. The other cool thing is how people conflate information with authentication. This is something that Richard spoke about too, but the general premise is tricking people into giving you more information because you already know certain things. Great examples of this are knowing acronyms, or using the correct jargon. One example was tricking cops into giving you information by learning acronyms from a publicly available training manual and pretending to be a superior officer, abusing their trust in authority.
I'll try to finish the book off in the next couple of days. I think social engineering is probably something that can't really be taught; you need a lot of natural skill from the get-go, and if you don't have it you're probably not going to be able to learn it.
Format String Exploits: eli5
This is something I’ve tried to learn over, and over again. Each time I get overwhelmed by the screenshots of gdb, and tune out. But I finally got it. I’m going to explain it in the way that finally made sense to me, and hopefully it helps out some other people too!
The goal: we want to use the format string to alter some values in memory. This could be something simple like changing a value on the stack, or altering the GOT to run shellcode or something else malicious.
We start by putting AAAA at the start of the string, then following it up with a bunch of %x specifiers (the goal is to find those AAAA's on the stack, which tells us where our input buffer lives: memory we control and can now point things at). The next piece is %n: it takes the number of characters printed so far and writes it to the address you specify. So we can place an arbitrary address on the stack through our input, then write a chosen number to it using %n (padding the output with width specifiers to control the count). This allows us to overwrite things like function pointers and GOT entries. I'll try to have an example from the CTF, but that's how they work.
Pretty potent but easily fixed: whenever you want to display user input, do this:
printf("%s", user_buf);
rather than this:
printf(user_buf);
PKI & Web of Trust
I read through Bruce Schneier's paper on the top 10 issues with PKI, and discussed some of the issues I had with it with friends as a follow-up. I'm not sure I agree with all of his views, but I do think it raises some interesting problems, especially the one I shared earlier about Kazakhstan.
It's a great paper and definitely worth reading. A lot of the criticism comes down to who gets to become a certificate authority and why, the fact that PKI doesn't solve all the problems it claims to, and the danger of conflating the everyday human notion of trust with the narrower definition of trust used in cryptography.
https://www.schneier.com/academic/paperfiles/paper-pki.pdf
My limited understanding is that PKI essentially provides a client and a server with a trusted third party that allows us to verify the identity of the server. But it has some limitations, in that the certificate authority is incentivised to give out as many certificates as possible, as this is how it makes money! It also doesn't prevent you from being scammed by a third party doing something interesting like setting up a fake Google website called https://gooooogle.com. If you visit that, is it the real Google or a fake one? The green padlock is there, so aren't most people going to trust it?
The certificates used are known as X.509, and they also serve as the basis for SSL/TLS. When an organisation wants a certificate there are a few hoops to jump through. They create a signed certificate request using an asymmetric key pair, keeping the private key to themselves as usual. The request contains their distinguished name, as well as the public key that is used to verify the signature of the request; sometimes other forms of proof are contained within the request too. The certificate authority then either issues a certificate or requires further proof, which can usually be provided by sticking some text on a webpage owned by the organisation, or by responding to an email sent to an address associated with the distinguished name. You'll also see self-signed root certificates distributed by a company so that its employees can use the company's PKI system; this usually grants them access to the intranet and other services provided internally.
Certificates then form a chain to prove validity. Either the certificate is signed by a root certificate already in the user's browser, in which case it's trusted straight away, or it links to another certificate, creating a chain. Once we hit a certificate in the chain that is trusted, the original certificate is also trusted. If we never reach such a point, we cannot verify its validity; this is what happens when you self-sign a certificate on your web server.
This is cool because it's quite simple for the end user: all the complexity is kept on the system administrator's side, meaning no real technical knowledge is required. It can also be quite dangerous, as users incorrectly infer what a valid certificate really means, extrapolating more information than they should. An alternative that handles this much better in my view is the web of trust.
In fact here’s a cool writeup of how criminals exploit digital certificates to then abuse the trust people have for them: https://resources.infosecinstitute.com/cybercrime-exploits-digital-certificates/
The web of trust is a more decentralised alternative. The concept is best described by Phil Zimmermann, who originally defined it as:
As time goes on, you will accumulate keys from other people that you may want to designate as trusted introducers. Everyone else will each choose their own trusted introducers. And everyone will gradually accumulate and distribute with their key a collection of certifying signatures from other people, with the expectation that anyone receiving it will trust at least one or two of the signatures. This will cause the emergence of a decentralized fault-tolerant web of confidence for all public keys.
I think this follows a more human, community-based approach to the problem. It provides the user with a lot more control to define exactly who it is that they do and do not trust; users can implement a vote-count system for determining trust, or a partial-trust scheme. Giving users more control comes at the expense of responsibility, though: individual users are required to manage their own private keys. Lose the key and you lose access to all the data, which is a nightmare if people continue to send you messages encrypted with it. This method is also a lot more complicated, and requires a lot more underlying knowledge about how the process works. That's hard; yesterday I struggled to explain to my Grandma how to pick a good password, and I think this is beyond what we can achieve.
Regardless, Bruce Schneier is a really smart dude and I enjoy his style of writing; there's a lot of good stuff to unpack from his teachings.
Attribution:
PKI Image: https://commons.wikimedia.org/w/index.php?curid=2501151
Human Weakness & Moral Hazards
I've decided to look into the Challenger disaster.
[embedded YouTube video]
Lead Up
The writing for this disaster was on the wall long before launch. Reports as early as ten years prior discussed the safety record of solid rockets and the inability to safely abort a flight after certain failures. Engineers wrote to the managers of the Solid Rocket Booster project sharing their views on the problem of O-ring erosion, and how it could induce an unrecoverable failure of the rocket. Standard NASA regulations were ignored: this problem was not reported to senior management, but instead kept secret. The O-rings were re-designated as "Criticality 1", and yet still no one suggested the shuttle be grounded until the flaw could be fixed and tested. In the later commission, Donald Kutyna likened this situation to an airline permitting one of its planes to continue to fly despite evidence that one of its wings was about to fall off.
Pre-launch Pressure
Forecasts for launch day predicted an unusually cold morning, with temperatures close to -1 Celsius, near the minimum temperature permitted for launch, and many of the components onboard the shuttle had no test data to support any expectation of a successful launch in those conditions. Numerous engineers expressed their concerns about the lack of test data indicating the likelihood of a successful launch, again raised their concerns with the O-rings, and pointed out the lack of redundancy should they fail. Thiokol management initially supported the recommendation to postpone the launch, in a meeting with NASA, who opposed any delays. NASA rules actually forbid relying on a backup for a "Criticality 1" component, which is exactly what this launch did. Later, Thiokol management changed their mind and supported NASA's decision to go ahead with the launch; NASA did not ask why they changed their mind.

I think the really frustrating thing here is the level of arrogance shown by people, and the disregard for the rules and procedures put in place to keep people safe. Multiple rules were broken which should have led to an abandonment of the mission, with everyone going back to the drawing board and re-planning the launch. Instead they naively went ahead despite multiple engineers expressing their concerns about the performance of an O-ring and the lack of evidence to suggest whether it would perform at those temperatures.
Post Coverup
I think the more depressing thing than the actual outcome of the disaster was the cover-up afterwards. President Reagan organised a presidential commission. NASA were criticised for their lack of openness with the press, who in turn responded with speculation. Feynman was a member of the commission, and he performed his own investigation, bypassing the top brass of NASA and speaking directly with engineers and technicians. He found that the estimates of reliability made by NASA's management were wildly unrealistic, differing as much as a thousandfold from the estimates of the engineers. This demonstrated a huge failing and lack of safety culture, going directly against the public perception of NASA at the time of the disaster. NASA was arrogant, and thought that given their safety record and public image they could get away with launching a dangerously unsafe shuttle.
I want to finish this off by just expressing how much of a legend Richard Feynman is; terminally ill with cancer, he spent much of his final months finding the root cause of the disaster so that he could speak plainly with the public about his findings. This video of him is absolutely legendary.
[embedded YouTube video]
Cryptoprocessors, and hardware security modules
Secure cryptoprocessors are dedicated microprocessors for carrying out cryptographic operations; they're usually packaged with physical security measures that make any manipulation tamper-evident.
Hardware security modules contain one or more cryptoprocessors.
The Trusted Platform Module is an international standard for a secure cryptoprocessor.
Cryptoprocessors are cool because they don't reveal keys or executable instructions on a bus, their purpose is to act as the keystone of a security sub-system, eliminating the need to protect the rest of the sub-system with physical security measures.
The idea is that the cryptoprocessor receives instructions in encrypted form, decrypts them inside its perimeter into plain instructions, and then executes them within that perimeter. By never revealing the program instructions, it prevents tampering with programs by technicians who may have legitimate access to the bus.
Cryptoprocessors have a number of pretty sweet features that ensure their security:
* Tamper evidence
* Conductive shield layers that prevent reading of internal signals
* Hardware-based registers that implement a one-way privilege separation model
* Automatic zeroing of secrets in the event of tampering
* A chain-of-trust boot-loader which authenticates the operating system before loading it
* A chain-of-trust operating system which authenticates software before loading it
* An internal battery
It provides you with some cool stuff like a random number generator, the ability to securely generate cryptographic keys, and remote attestation, which creates a nearly unforgeable hash summary of the hardware and software configuration, helping to verify that software has not been changed. We can also use it to ensure platform integrity, and to enable full-disk encryption and digital rights management.
The most common secure cryptoprocessor you'll know of is the smart card: a piece of plastic the size of a credit card with an embedded integrated circuit.
How safe are they though? The IBM 4758 was attacked by a team at the University of Cambridge, who found a flaw in the software loaded by the cryptoprocessor, making all the hardware features redundant!
Steganography, and reading between the eyes
I had a bit of an issue getting the flag from the husky image. I tried the usual tricks of running strings and hexdump to see if it had been added to the image anywhere. After a while I watched this Computerphile video which shows some common techniques that can be used to hide information in images, and it was really interesting! One of the common ones is the least significant bit technique: essentially you just take the rightmost bit of each pixel value and replace it with a data bit, which can later be easily decoded. Obviously if you're planning on doing this you should probably encrypt the message with a symmetric cipher too, since on its own this just obfuscates the message you wish to send.
I still had issues decoding this. There are a number of tools that will give you an ASCII representation of an image's least significant bits, but when I tried them there was just a bunch of noise, and no clear evidence there was a flag.
[embedded YouTube video]
I knew the flag was in the form picoctf{...}, so after a little bit of research I found the source of the problem. The original image is over 6 MB; the one linked on imgur that I had been trying to decode was less than half a megabyte. Clearly imgur applied some lossy compression algorithm, and we lost the data bits. Trying the original image, which I found in a writeup on GitHub, got me the flag.
Sources:
https://www.cybrary.it/0p3n/hide-secret-message-inside-image-using-lsb-steganography/
https://github.com/mzfr/ctf-writeups/blob/master/picoCTF-2018/Forensics/Reading%20Between%20the%20Eyes/readme.md
Comes at an interesting time considering Dutton’s just asked for an extension on the allowed time frame for metadata to be retained.
Hey Grandma
Ok so the deal with passwords is you need to pick good secure ones; short ones don't cut it because computers are really good at trying lots of passwords in a very short amount of time. How do we pick good secure ones? That's a great question that many people have very wrong ideas about. I'd say your best option is to pick a sentence or phrase that you really like that doesn't mean much to anyone else. Then take the first letter of each word, capitalise some of them, and add in numbers and special characters as required. This will hopefully give you a password that's more or less 15 characters in length, which should be too hard for even the fastest of computers to break. Most importantly, don't write it down, and don't tell anyone what that sentence is, or your password! No one ever needs to know that password; if they tell you they do, they're lying.
The next most important thing is to avoid password reuse, because once someone bad has found one of your passwords it's really easy for them to try it everywhere else! Luckily for you, you only have two passwords to remember: the one for your email, and the one for your Facebook account! In fact it'd be even better if you could forget your email password, as that way you can't check it and it'll be impossible for you to fall victim to all the attacks your generation are so good at falling for! If you were a more savvy user I would suggest a password manager, but my fear here is you'll set a weak master password, or it'll be too difficult for you to use.
Love from your favorite grandchild.
Threat Modelling
Bit of a late one, but I’m working my way through the last few I haven’t yet done.
This was a cool one; I like thinking like the bad guy and then thinking about what we can do to prevent them from doing this stuff. I broke my threats down into four categories: physical harm, financial harm, reputational damage, and attacks on national security.
The energy network is the lifeblood of our economy, so in a wartime event there's huge incentive for an enemy state to take out the grid. I assumed they'd achieve this by either destroying plants in an approach similar to Stuxnet, or through an insider attack. Similarly, the wiring that connects households and businesses to the grid could be targeted.
Reputational damage is more focused on either stealing user data, or leaking internal documents that potentially put the companies in a bad light. This could be done by either penetrating their network or by a spear-phishing attack; key stakeholders in these types of companies are a little bit too blasé about what they do, and what they have access to, on their LinkedIn accounts.
Financial harm is just the usual: opportunistic hackers could perform targeted ransomware attacks asking for money, or hacktivists could target the infrastructure for being associated with projects like Adani.
The final area I focused on was physical harm to staff. This could come through targeted attacks, perhaps on call centre workers who chase up old bills, or on more senior members of staff, again by hacktivists or criminals wishing to glean more information about the system.
I did this one on a whiteboard, so here's my attack tree:

Week 8 Review
Work's been pretty busy this week so I haven't been able to get through the backlog of modules as quickly as I'd like, and then Caff released the CTF yesterday and that was a whole other rabbit hole. There are about four activities I'm still behind on; I'm hoping to get through one tonight, two tomorrow, and the last on Sunday. I haven't exactly been putting them off, it's just that everything else has been so much more fun, like going through malware, the CTF, and researching different stuff from the OWASP Top 10. I think this is good stuff to be doing too, but I need to finish off those modules.
I think this week was quite productive: I did some reading on Chernobyl, made a start on the CTF, got my Something Awesome project done, and set up Metasploitable; there was a great tutorial, which I'll link, that shows how to convert VMware images to Hyper-V. I finally got to take a look at Ghidra too, which is pretty rad since the decompile function is nice, and I sure as shit can't afford IDA Pro.
I'm gonna try to plan another lightning talk, hopefully for week 9 but more likely week 10. I've ordered some USBs that can have custom binaries flashed to them. The goal is to try and create some ghetto Rubber Duckies that cost $3 as opposed to $50+!
I got some more work done on my job app too. I think it's more or less good to go now; it just needs some editing, and then I need to chuck in some links to back up all the stuff I've claimed.
Been a fun week, sad that there’s only a few more to go.
(I didn’t realise there were gifs I could embed, where has this been the last 8 weeks!?!?)
oh no more overfl0ws
Caff sneakily added another buffer overflow activity and I didn't even see it! This one seems really similar to the others so I did it in a very similar way. Normally I would have opened it in IDA to have a cheeky geez at the different functions and their addresses, but I'm trying to practise this in a more exam-style environment, so I used nm to list all the function addresses. It's probably gonna be the function titled PlzDontCallThisItsASercretFunction; what are they gonna do if I call it???
I did the same thing as before: kept playing around with the length of the buffer along with the address of the function, and checked the output of strace to see where we were trying to point to. Eventually I got the length right, and we get what we're looking for:
Hi, welcome to the COMP6841 Feedback form! What feedback would you like to give? One line only, we can't fix too much at once! Thank you, your feedback has been sent to `/dev/null`!
COMP6841{STRINGS_WONT_ALWAYS_WORK} Illegal instruction
Yeet.
5G
Kinda just playing devil's advocate here; personally I think Huawei shouldn't be allowed to be part of the 5G network, but I also don't really agree with the Australian government just doing whatever the American government tells them to.
Dear Mr Trump,
Sorry to say this, but we're going to be letting Huawei roll out their 5G network. The reasons are simple: they're promising to do it for a third of the price of every other competitor, and I think it's a better use of taxpayer money to save on this rollout and instead use the extra money to fund education and healthcare.
The truth is we know that China is going to be spying on us, but let’s be honest the US are spying on us too right? This kind of espionage is going to be taking place regardless of the manufacturers we choose, and as such I’ve decided the Australian public might as well get some small benefit out of this. Quite frankly I don’t believe your tariffs are sound, or intelligent. This is a great opportunity for Australia to bolster its relations with China, and something I’m really excited to be working with them on.
Whether you like it or not, America’s on its last legs, and China’s the new world power.
Kind Regards,
Scummo
Privacy
After last night's seminar, and because it's a module this week, I'm downloading all the data Facebook has on me. To speed up the process I've decided to download just the information about me, rather than the content and images I've posted. I did this because I'm pretty comfortable with the information I've shared, but am more interested in what Facebook has inferred from that information; maybe that'll finally give me the wake-up call to deactivate my account.
I've downloaded all my information, and I'll post what I share later. The most interesting parts are the advertising data and how they've been able to infer what kind of ads I should be shown; some of them are quite accurate, but others have nothing at all I'm interested in. The spookiest was lockpicking: I haven't searched for that on Facebook, but I did purchase some picks a month ago, and somehow that's been inferred.
The worst one is the address book though: every single contact I've ever had has been uploaded to Facebook, along with creepy shit like when I last contacted them. I don't think I've ever said yes to letting Facebook scan my contacts, so maybe they've done that without my permission? Interestingly they've linked my old boss to a Facebook account despite the fact he never signed up; does this mean they're storing additional info about people not in the system?
I’ll try and look into this more, but it’s kinda gross either way. All the usual stuff is there too like search history, which was huge! I’ve had this account for about 10 years though so I guess that’s not too surprising.
Tell you what though, I think the Google Maps one that shows you everywhere you've been is sick. That's gotta be one of the coolest things I've seen in a while; they should have this as the main reason to sign up. A little bit scary given what you could achieve with it, but also cool as heck.
Week 8 lectures
Week 8 lectures were cool. I really like when Richard explores other domains and shows the parallels, as well as what we can learn to improve systems in our own field of work. I think the most powerful metaphor of the course was Richard shining his light around the dark lecture room: we can only see what we focus the light on, and we miss everything around it. What I really took away from the morning lecture was the importance of focusing on security from the start. If you don't, you'll be OK in the sense that most of the holes won't line up most of the time, but a determined hacker WILL make the holes line up, and then you're screwed. Richard demonstrated this with a previous uni assignment for a first-year course, with him as the attacker.
We finished off the evening lecture with seminars on privacy and forensics; both were awesome! The privacy talk had some cool demos showing how much information different companies have on you, as well as experiments creating Faraday cages. The forensics talk went through the process of modern forensics, using a device that blocks write requests to the drive to ensure evidence isn't planted, but allows you to clone the drive so you don't wear it out. Then Richard went through more on the Three Mile Island accident. Again this was really interesting, as it shows a really complex system with a lot of moving parts, a lot like what we have in modern online systems. Finally he closed with some advice. What's the best way to ensure data can't be breached? Destroy it once you're done with it; question what you really need to keep, and discard the rest. Write two press releases for your employer, and let them decide which they'd rather give out when disaster strikes. It's not a matter of if, but when.
I’m missing the tutorial today which I’m really bummed about, it’s the only one I’ll miss but I’m sad that I won’t get to see all the cool stuff people were working on for their something awesomes.
We've got to explore one of three different disasters, as one will appear in the final exam. I know a bit about Chernobyl, so I might explore the Challenger disaster instead.
[embedded YouTube video]
This was my final week of the project, so I finished it off with a quick walkthrough of how to hide modules from kldstat. I wanted to try to hide TCP ports as well, but ran out of time.