Something Awesome - File IO CTF
The file IO CTF involved a simple file descriptor redirection issue in Linux. The challenge program took an integer as a file descriptor number, read some input from that descriptor into a buffer, and released the flag if the buffer matched a hashcode.
The way I solved this was by simply passing file descriptor 0 (stdin) to the program and writing the correct hashcode.
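A minimal Python sketch of the vulnerable pattern (the real challenge was a separate program; the hashcode below is an illustrative placeholder):

```python
import os
import sys

HASHCODE = b"deadbeef"  # illustrative placeholder, not the real challenge value

def main() -> None:
    fd = int(sys.argv[1])              # the user picks which descriptor is read
    buf = os.read(fd, len(HASHCODE))   # the program never checks what fd refers to
    if buf == HASHCODE:
        print("flag{...}")

if __name__ == "__main__":
    main()
```

Running `python3 chal.py 0` and then typing `deadbeef` releases the flag, because descriptor 0 is stdin and the input is entirely attacker-controlled.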
This CTF represents a class of bugs where the programmer did not anticipate that users would intentionally inject their own data into the program, or gain access to its source code.
Cryptographic Substitution Ciphers
Throughout the semester, I have spent a total of 2.5 hours on solving Cryptographic Substitution ciphers.

Some of the findings from my time solving these ciphers are that:
1. Sometimes you break the cipher at the most unexpected moments
Sometimes I spend hours or even days on one of the ciphers, and I still will not get the right answer. Other times I am able to complete the cipher instantly. This has taught me that breaking a cipher is not a matter of time, but a matter of progress.
Even if the cipher is particularly difficult or time consuming, as long as I am making progress by testing out different ways to solve the problem, I am being just as productive as if I had correctly solved a simpler cipher.
2. Look for common words or letters
With text that is encrypted with a substitution cipher, especially natural language, spotting a common word such as “to”, “and”, “the”, or “then” quickly makes the entire problem much easier to solve.
This is also true for the common letters “a”, “e”, “i”.
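A quick first step that captures both ideas is a letter-frequency count, compared against typical English letter order. A minimal sketch:

```python
from collections import Counter

ENGLISH_ORDER = "etaoinshrdlcumwfgypbvkjxqz"  # roughly most to least frequent

def frequency_hint(ciphertext: str) -> None:
    letters = [c for c in ciphertext.lower() if c.isalpha()]
    # pair the most common cipher letters with the most common English letters
    for (cipher_letter, count), guess in zip(Counter(letters).most_common(), ENGLISH_ORDER):
        print(f"{cipher_letter} ({count}x) -> maybe '{guess}'")

frequency_hint("WKH TXLFN EURZQ IRA MXPSV RYHU WKH ODCB GRJ")
```

Short ciphertexts will not match the English ordering exactly, but even a rough mapping of the top few letters usually breaks enough words to let you guess the rest.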
Case Study 8 - Government Surveillance
Pro-Surveillance Reasons
The government should monitor networks to prevent terrorist plots. Anti-terrorism has in recent years become central to a country’s national security.
The government has a responsibility to prevent predators from preying on children.
The government should monitor convicted criminals to ensure that they do not engage in further illegal activities.
Anti-Surveillance Reasons
A citizen’s data can be used in inappropriate ways, such as to find ways to wrongfully convict them, or to incorrectly identify them as a criminal or some other unwanted profile.
The government is frequently hacked, and a breach could leak all citizen data into the wrong hands. If this data is stolen, it would place every citizen and the whole nation at risk.
The government may become corrupt and sell the data. As with being hacked, if the government is infiltrated and leaks data, the nation becomes insecure.
Case Study 6 - War With a Super Power
Ways A Nation (AUS) Would Be Attacked
- Psychological Warfare
Social media and corruption can be used to generate divisiveness between different ethnicities in this multicultural country. This is one of the most potent ways to attack a country, as stirring internal conflict is extremely cheap, resource-efficient, and very damaging to the afflicted country.
A nation’s political system can also be attacked, by making the citizens distrust their government, or by forcing the government to make decisions which are unfavourable to the citizens. The democratic political system allows for a lot of individual freedom, which typically makes it easier to incite protests against the government. This can be done by creating deep fakes to frame a member of the government in a corrupt light. By turning the people against the government, the government will essentially be crippled and become more vulnerable to attacks.
- Take out technology
Most modern countries are extremely dependent on technology, and most modern warfare will be fought on the fronts of technological innovation. To attack the nation and its military, an attacker will take out the technological infrastructure of the country. This can include taking out the GPS system, destroying the routers that form computer networks, destroying traffic light systems, and taking out electricity sources. This would send the nation back to the stone age, keeping it technologically incapacitated and completely unable to function as a nation.
- Attack and Convert Allies
The nation is relatively far away from many countries, so one way to gain access to it is by converting its allies. Surrounding nations are poorer and more susceptible to influence, so by threatening the surrounding countries, an attacker can turn them against the target nation.
Ways the Nation Can Defend Itself From a Superpower
- Centralise News Source to Prevent False Information
The nation can make sure that anti-government news is not disseminated by restricting news sources to government agencies. This may make citizens more wary of the government, but this is important to prevent full-scale social engineering attacks by the enemy.
Case Study 5 - Smart Cars
Assets to Protect
As a policy maker, I would be worried about four assets in regards to the introduction of smart cars. These are privacy, safety, legalities, and employment.
The first asset would be the privacy of citizens. This is of great importance because smart cars are internet-connected devices, which means the companies running the software systems will have access to a wealth of personal, financial, commercial and other forms of data. This data will become a prime target for hackers and, if breached, would represent a drastic breach of national security.
The second asset would be public safety. The policy maker is responsible for implementing policies which serve its citizens, as well as the country, so inspections and restrictions on the design of the smart car will be important for ensuring the safety and prosperity of the country.
The third important asset is the legal framework in place regarding transportation systems, vehicle malfunctions, and civilian automotive casualties. Smart cars are imperfect and are guaranteed to have incidents. As the policy maker, I will have to decide how to create a legal framework that will be agreeable to the population as well as the smart car company.
The last asset I will consider is employment. There are many people with jobs dependent on driving vehicles, such as transportation services and shipping. I will have to consider the ramifications of the introduction of smart cars on employment rates in transportation industries, and how to ensure that the people made redundant will be able to support themselves and find a new livelihood.
Ways to Address Three Highest Risks
One way to protect sensitive data is to impose legal restrictions on the types of data and the amount of data to collect. I would prevent companies from aggregating data on particularly sensitive data, and impose laws for better data management such as requiring companies to distribute data among different databases to prevent a single point of attack.
To protect public safety, I will impose laws requiring multiple layers of contingency plans for emergency crashes. Each car should have a laser sensing system, emergency brakes, and other systems to keep the car on track and provide early crash detection.
Another requirement I will impose is for post-emergency detection. Upon the detection of an accident, I will require each car to contact the nearest available hospital, and to give specific information on the crash environment such as location, age of the person, etc. This information is crucial to allowing hospitals to respond quickly and effectively.
To reduce high rates of unemployment, I will restrict the speed of the smart car system rollout, such as by requiring the cars to be rolled out in smaller cities first, then progressively to larger cities over a period of ten years. This will give time for people in the transport industry to be reskilled and rehired, preventing drastic unemployment.
Recommendation
Overall, I would recommend that the smart car system be implemented in a small portion of the cars, until cryptography and smart cars become more advanced. I believe there are many benefits to using smart cars over normal human-driven cars, such as a lower likelihood of crashing, more efficient road systems reducing transport costs, and a transfer of people out of low-skill jobs into fields with greater earning potential. However, at this point in time, there are too many risks associated with a full-scale replacement of human drivers, such as privacy concerns and technical vehicle faults. So I propose a smaller trial of the cars over the course of a few decades, until more information is gathered and the cars meet a high standard of safety and security.
Case Study 4 - Protect New Office
The top assets to protect in the office are:
- Building access authentication
  - Public doors
  - Elevator
  - Staff special access doors
  - Authentication by card
- Project database or computer locations
  - Location of computers
  - Location of any servers
  - Location of physical objects used in the project
  - Location of bins and hardware disposal
- Office locations
- Building access points

Reduction of points of failure
- The building has two entry points with trained security guards and staff, instead of many entrances.
- All contractors and hired service people must come through the entrance to be verified by reception.
- All bin collectors must pass through a designated spot.

Redundancy
- Multifactor authentication
  - Retina scan
  - Fingerprint
- Tailgate detection
- Servers are in their own location in the building, with restricted access
- Security guards
- Security cameras

Multiple layers of physical security instead of one layer
- The building is separated into a reception area for general access, then another access point for software engineers, then a separate access for private meetings with business partners, and another for project managers.
Case Study 2 - Harry Houdini's Secret
His wife can distinguish the real psychics from the fakes by setting up a protocol that depends on a shared secret between her and Houdini, but which uses a dynamically changing rule to prevent any one secret from being reused.
The way this protocol will work is that before Houdini's passing, he and his wife will decide upon a particular text or document that will be their shared secret. Then to verify the legitimacy of a psychic, his wife will ask them to tell her what a particular word in the text is. If the psychic is able to answer correctly, then they are a legitimate psychic, and must have contacted him to ask about the words. Those who were not able to answer will be determined to be fakes.
The dynamic secret will be that the correct word is the nth word after the last word correctly guessed by a legitimate psychic, where n is the number of psychics who have correctly guessed the secret so far. Initially, the first word will be some random word chosen by Houdini and his wife. This changing secret is necessary because only Houdini's spirit and his wife will know how many people have correctly passed her test. Even if a psychic is legitimate and guesses the secret, they will not be able to tell any other psychic what the next word is, as they do not know how many other real psychics there are. This secret is also very easy to remember but almost impossible to guess, which means the protocol is not hard to store safely and will not be easily discovered, further ensuring its safety.
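A minimal sketch of the protocol as state kept by the wife (the shared text and starting position below are stand-ins for whatever Houdini and his wife actually agreed on):

```python
class SeanceVerifier:
    def __init__(self, shared_text: str, initial_index: int):
        self.words = shared_text.split()
        self.index = initial_index   # position of the current secret word
        self.successes = 0           # number of psychics verified so far

    def verify(self, claimed_word: str) -> bool:
        if claimed_word != self.words[self.index]:
            return False             # a fake, or a stale secret
        # advance: the next secret is the nth word after this one,
        # where n is the number of successful verifications so far
        self.successes += 1
        self.index += self.successes
        return True

# hypothetical shared text and starting word
verifier = SeanceVerifier("it is not the critic who counts not the man in the arena", 3)
print(verifier.verify("the"))   # True: the secret was word 3
print(verifier.verify("the"))   # False: the secret has already moved on
```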
Some other groups came up with good principles to preserve secrets too:
1. Multiple layers of security
One team proposed to put in place multiple layers of authentication to ensure that the psychic has a lower probability of guessing their way past the authentication system. This makes the system less like an M&M security system, protecting the authentication system from single instances of luck.
2. Write a one-time pad before he dies
Another team suggested that Houdini and his wife write a one-time pad before his death, and select a text to encrypt with it. A psychic would then have to consult Houdini's spirit to decrypt the text and tell his wife the secret. This is a good method for verifying that the first psychic is legitimate; however, it fails once a psychic correctly determines the secret. After that, the one-time pad is no longer safe to reuse, as reused one-time pads can be broken easily by cross-referencing the encrypted messages.
Security Everywhere - Commonwealth Bank lost data of 20m accounts
The Commonwealth Bank has reported that it improperly disposed of backup tapes containing 15 years' worth of data on 20 million customers. The data on these tapes included names, addresses, account numbers, and bank statements.
Despite this, the bank achieved record profits in that year, and maintains that the data is “likely” to have been destroyed properly. What could an attacker do with this data?
The most concerning thing that jumps out at me is impersonation and fraud by criminals. The potential is huge. With 20 million accounts, the leaked data could cover almost the entire Australian population, as well as countless firms, both large and small. This is a prime source of information for attackers to mount social engineering attacks on CBA customers.
If someone found a recurring transaction on a bank statement, perhaps a loan payment, they could convincingly provide evidence that they are in fact the payee of the loan, and request that funds be redirected to their account.
The same person could also leverage the financial details to steal more sensitive data. The bank account records of a company likely contain details of the partner organisations it deals with. An attacker can then use this information to claim to be a representative of a partner, and steal more financial data or other highly sensitive information.
The risk is even higher for individuals who are simply paying off a regular debt, and do not have the resources to monitor and prevent fraud.
By improperly disposing of the data, the Commonwealth Bank may have risked billions of dollars being stolen from its customers.
How should Commonwealth handle this situation and how should customers protect themselves?
Firstly, the Commonwealth Bank should revise and upgrade its data destruction system. It should have in-house data destruction centres, tracked with GPS monitoring to ensure that the data is always in the correct location. It should store the data in anti-tamper devices and monitor it constantly. Furthermore, the data should be destroyed at different times and in different locations, to prevent any one individual from accumulating all of the stolen data.
Customers should protect themselves by reviewing their bank statements whenever someone or some organisation contacts them about their transactions. Businesses should verify the phone contacts of such organisations and keep a list of highly suspicious callers and the organisations they claim to represent. Any financial decision or disclosure of information should be vetted by someone in a more senior position. Individuals should remain vigilant: always double-check an organisation's contact details through its official website, and call back the official contact before making any changes to debt repayments.
Security Everywhere - CloudFlare Global Outage Due to Regex Bug 02/07/19
Cloudflare experienced a global outage of its CDNs for 27 minutes on the 2nd of July, making it the first global outage in eight years. It meant that 19% of all internet traffic was cut, with 550,000 businesses losing network connection for half an hour. Of the Cloudflare service users, the largest sector represented was in Hospital and Healthcare, meaning that hospitals were unable to provide efficient service to patients during that time, potentially endangering lives.
So how and why did it happen?
The root of the problem was a faulty Web Application Firewall (WAF) rule update, which was rolled out prematurely as part of a new scheme to defend against emerging security threats. This was coupled with a poor recovery protocol, as well as severe under-preparation by the Site Reliability Engineering team. This incident harks back to the first week of lectures, wherein we discussed that security issues often occur as a result of a chain of unfortunate events, or a chain of failures.
Unfortunately this seems like it is always the case, especially with large, dynamic and extremely complicated projects like building a Content Delivery Network, and it is extremely hard to mitigate. In this post I will discuss what went wrong, give my opinion on how they could have approached it better, and discuss their solutions.
The first error was introduced by an engineer who modified a WAF rule. The rule contained a single regex bug, which backtracked endlessly and blocked the processors from progressing. The reason this faulty regex was not caught was that Cloudflare had introduced a separate protocol for WAF rule updates, in response to increasing security threats, that allowed rules to be pushed almost instantly to their customers to stop new malicious threats from manifesting.
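The pattern below is not Cloudflare's actual rule, but it reproduces the same failure mode: a nested quantifier that makes a backtracking regex engine take exponential time on an input that almost matches.

```python
import re
import time

# (a+)+$ forces the engine to try exponentially many ways of splitting
# the run of 'a's between the inner and outer quantifier before failing
pattern = re.compile(r"(a+)+$")

for n in range(18, 27, 2):
    text = "a" * n + "b"   # the trailing 'b' guarantees the match fails
    start = time.perf_counter()
    pattern.match(text)
    print(f"n={n}: {time.perf_counter() - start:.2f}s")
```

Each extra 'a' roughly doubles the running time, which is how a one-line rule change can pin every CPU that evaluates it.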
The second error was that the WAF rule change was not even marked as urgent; it was of standard priority. This meant that it should have passed through all the standard checks, been tested on internal sandboxes, and been rolled out incrementally. But because of the recent WAF rollout protocol changes, and the high volume of WAF rollouts (370 per month), this rule was not dispatched through the correct rollout procedure.
The third error was in the response to the global outage. Firstly, their internal response team was severely underprepared and did not know how to quickly and effectively handle the outage. On the initial discovery of the global outage, it took them time to understand that the outage was not caused by some unprecedented DDoS attack, but by their own system. They were then unable to follow the protocol quickly because they had not been trained for such a situation, since the last global outage was more than eight years ago.
Secondly, the software they had in place to defend against a global outage was riddled with dependencies and flawed assumptions that rendered it useless. When they discovered the cause of the outage, they were not able to disable the offending WAF rule by flipping a global kill switch, because the kill switch required them to access their in-house authentication system. This authentication system was running on their own servers, which were hung. Effectively, the emergency stop button for their servers was only accessible from their unusable servers, much like leaving car keys locked inside one's car.
Furthermore they needed to access a special bypass system to access their internal services including Jira and their build system, but were unable to do so because the system denied access to users who had not logged in frequently.
The result of these consecutive failures was a global outage that took out their entire network and the hundreds of thousands of services and websites running on them.
Now that you understand what went wrong, I will try and present my solutions on how to prevent this from ever occurring again.
Firstly, I do not believe that Cloudflare needs to re-evaluate their rollout process or reduce their WAF rule rollout frequency. They do not have a track record of failing rollouts, as they had not experienced a failure of this magnitude in eight years, despite their enormously high rate of WAF rule changes. Furthermore, they had proved that a previous immediate WAF rollout was able to stop a highly malicious attack on their servers within a matter of hours, mitigating any serious damage to their clients. I believe that human error inevitably occurs at some point in time, so it is the response protocol that is important.
I believe that they should have had fewer dependencies in their recovery system, especially since it was a last resort. Their response system was entirely dependent on the very servers that they were trying to rescue, meaning that they were in all respects rendered ineffective once the outage had occurred. They should have had their own dedicated servers which were tested against emergency circumstances to make sure that they can push changes throughout the CDNs when disaster strikes.
Secondly, they should compartmentalize their server CPU usage so that it does not hang as a result of one process. If they had used a different regex engine which did not continue processing instructions after a reasonable limit, or if they had built in a kill switch which disabled or bypassed processes which were hung, they would not have had this issue.
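As a crude sketch of that second idea, the match can be run in a separate worker process with a deadline, so a hung match can be killed outright (linear-time engines like re2 avoid the problem by construction instead):

```python
import multiprocessing
import re

def try_match(pattern: str, text: str) -> None:
    re.match(pattern, text)

def match_with_deadline(pattern: str, text: str, seconds: float) -> bool:
    # run the match in a separate process so a runaway match can be killed
    worker = multiprocessing.Process(target=try_match, args=(pattern, text))
    worker.start()
    worker.join(seconds)
    if worker.is_alive():
        worker.terminate()   # the kill switch: abort the hung match
        worker.join()
        return False
    return True

if __name__ == "__main__":
    ok = match_with_deadline(r"(a+)+$", "a" * 40 + "b", seconds=2.0)
    print("completed" if ok else "killed runaway regex after 2s")
```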
Thirdly, I believe they should train their emergency response team regularly, for example every six months, to be prepared for such occurrences. They should undergo drills to practise using the recovery systems and following the recovery protocol, and also research new ways to design the recovery protocol to reduce the time taken and damage inflicted. These drills should be conducted on their internal networks with real attacks and real network-destroying bugs purposefully installed to simulate this situation. Of course, further precautions should be taken to make sure these bugs do not end up in the production code.
The actual changes implemented by Cloudflare are as follows:
1. Re-introduce the excessive CPU usage protection that got removed. (Done)
2. Manually inspecting all 3,868 rules in the WAF Managed Rules to find and correct any other instances of possible excessive backtracking. (Inspection complete)
3. Introduce performance profiling for all rules to the test suite. (ETA: July 19)
4. Switching to either the re2 or Rust regex engine which both have run-time guarantees. (ETA: July 31)
5. Changing the SOP to do staged rollouts of rules in the same manner used for other software at Cloudflare while retaining the ability to do emergency global deployment for active attacks.
6. Putting in place an emergency ability to take the Cloudflare Dashboard and API off Cloudflare's edge.
7. Automating update of the Cloudflare Status page.
My view of the changes actually implemented is mixed. Firstly, I would like to commend them for resolving the error directly, by introducing CPU usage protections, as well as adopting a more reliable regex engine that will not backtrack indefinitely.
However, the manual inspection of all 3,868 WAF rules to look for specific instances of backtracking, as well as the performance profiling, is a poor approach in my opinion. The rules are being changed 370 times a month, or roughly once every two hours. Even if they identify and fix several faulty rules, within a year there is the potential for thousands of new changes to be introduced. Furthermore, another undiscovered bug could lie in any part of their code, and be caused by many things other than regex backtracking alone. They have taken the approach of patching the error where it occurred, but not the root problem, which was their recovery system.
In their released response, it seems they entirely ignored decoupling their recovery system from their servers. It is like diving into a lake to save a drowning person, who could pull you underwater with them, or trying to use your hands to pull a person away from a live wire: the very thing you are trying to save is the danger, and by tying yourself to it, you are putting yourself in danger. They should have addressed this dependency issue, as an effective recovery system would resolve all cases of system failure immediately, which is more effective than trying to pre-emptively discover all potential bugs.
Overall this case study highlights a few points. Firstly, it highlights that large companies are very prone to major, devastating bugs and failures, so no one should rely on any one service entirely, or they are simply joining a sinking ship with a hole no one has found yet. If you are a business using a CDN, run some of your services through another CDN, or have your own in-house backup servers to handle emergency network failure. Secondly, it highlights that even large companies with extremely experienced professionals do not always respond correctly to emergencies, which underlines the need for retraining or hiring more adequately trained staff.
WEP Weaknesses - Stream Cipher
The WEP protocol was deemed weak and unsuitable for wireless packet transmission security due to its use of RC4 as the encryption mechanism.
RC4 is known as a stream cipher: it operates on successive bytes of a data stream, instead of operating on fixed-size blocks of plaintext at a time.
In WEP, this encryption method used a persistent master key, commonly 40 bits long, with an initialisation vector (IV) appended to the master key. This combined value was used to seed a pseudo-random sequence, the keystream, which acts as the “scrambler”. The output stream is then the XOR of the keystream and the plaintext.
However, the problem was that the IV was only 24 bits in length. By the birthday bound, a 24-bit IV is likely to repeat within about 2^12 messages, and a repeated IV means the same keystream is effectively reused for encryption.
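A quick back-of-the-envelope check of that bound, using the standard birthday approximation 1 - e^(-n(n-1)/2d) for n packets and an IV space of size d = 2^24:

```python
import math

def collision_probability(n: int, space: int = 2**24) -> float:
    """Approximate chance of at least one repeated IV among n packets."""
    return 1 - math.exp(-n * (n - 1) / (2 * space))

print(f"{collision_probability(2**12):.2f}")   # ~0.39 after 4,096 packets
print(f"{collision_probability(12_000):.2f}")  # ~0.99 after 12,000 packets
```

On a busy access point sending thousands of packets a second, repeats arrive within seconds to minutes.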
The issue is that when the same keystream is used twice, stream ciphers become extremely vulnerable. Stream ciphers like RC4 combine the keystream with the plaintext using XOR, and XOR is self-inverting: anything XORed with itself cancels out. So when two ciphertexts encrypted with the same keystream are XORed against each other, the keystream cancels and the result is the XOR of the two plaintexts.
Recovering the individual messages from that XOR then becomes much like solving a running-key (polyalphabetic) cipher, which computers can do quickly; note that only the overlapping portion of the two messages can be attacked this way.
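A minimal demonstration of keystream reuse (a random toy keystream stands in for RC4 output under one key and IV):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = os.urandom(32)            # toy stand-in for RC4 output
c1 = xor(b"attack at dawn!", keystream)
c2 = xor(b"retreat at dusk", keystream)

mixed = xor(c1, c2)                   # keystream cancels: equals p1 XOR p2
crib = b"attack at "                  # a guessed fragment of message 1
print(xor(mixed[:len(crib)], crib))   # b'retreat at' - message 2 leaks
```

This crib-dragging trick is exactly why reusing a keystream, or a one-time pad, is fatal.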
Single vs Multiple Encryption
Multiple encryption is not substantially more secure than single encryption, and can even be less secure. This is because the majority of attacks do not break encryption protocols anyway; they usually exploit some human error that enables the attacker to enter the system. Multiple encryption, however, is prone to weaknesses of its own, some insubstantial and some severe.
Weaknesses:
- When encrypting with one key, if a proper cryptographic authentication function is used, the message has integrity and authenticity. With double encryption, the outer algorithm may be prone to bits being changed.
- With some double encryption schemes, encrypting with two different keys results in a ciphertext that can be decrypted with a single third key.
- Meet-in-the-middle attacks take advantage of a time-space tradeoff to recover both keys in far less than the brute-force time of breaking the combined keys (see the sketch after this list).
- The actual increase in difficulty gained from multiple encryption is hard to prove, and has not been demonstrated convincingly so far, meaning it is probably not yet time to adopt it as the better form of encryption.
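A minimal sketch of a meet-in-the-middle attack on a hypothetical toy cipher with 16-bit keys: instead of brute-forcing all 2^32 key pairs, the attacker tabulates the middle values from the plaintext side and looks them up from the ciphertext side, at a cost of roughly 2 x 2^16 operations plus the table space.

```python
import hashlib

def enc(key: int, block: bytes) -> bytes:
    # toy cipher: XOR the block with a keystream derived from a 16-bit key
    ks = hashlib.sha256(key.to_bytes(2, "big")).digest()[:len(block)]
    return bytes(a ^ b for a, b in zip(block, ks))

dec = enc  # XOR cipher: decryption is the same operation as encryption

# the attacker holds one known plaintext/ciphertext pair under double encryption
k1, k2 = 0x1234, 0xBEEF
plaintext = b"attack at dawn!!"
ciphertext = enc(k2, enc(k1, plaintext))

# tabulate every possible "middle" value from the plaintext side...
middle = {enc(cand, plaintext): cand for cand in range(2**16)}
# ...then meet it from the ciphertext side
for cand2 in range(2**16):
    m = dec(cand2, ciphertext)
    if m in middle:
        print(f"recovered k1={middle[m]:#06x}, k2={cand2:#06x}")
        break
```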
Flaws in PKI
Concept of the Weakest Link
95% of security breaches over SSL occurred because of some form of human error, rather than an error in the underlying PKI. If a person mismanages their private key or stores it in a vulnerable location and the key is stolen, then the certificate is meaningless. The human is the weakest link that breaks the PKI chain.
Certificate authorities can also introduce errors in signing, as the initial identification process for registration is usually performed by a person over email or some other form of indirect contact. Furthermore, certificate authorities are not specialists in identification, which makes them more susceptible to identity fraud. This means a clever hacker can use fake details and spoof their identity.
Furthermore, a PKI is unable to provide holistic security for a user browsing a website. It only enables authentication of a website's domain, and does not prevent users from ignoring its warnings, which inevitably occurs.
Corruption
Certificate authorities are corruptible as they have a conflict of interest: they are supposed to ensure only valid websites are signed, but they also want to sign all websites to earn more money. This means that they have an economic incentive to reduce the stringency of their protocol, and even disobey it, allowing suspicious websites to be authenticated.
Hash Collisions
The underlying PKI is not entirely secure either, because it relies on several parameters which have in practice been set incorrectly. For example, the PKI used MD5 hashes for security, and MD5 was severely broken. This enabled hackers to generate fake certificates that authorised them as root certificate authorities and essentially take control over the entire PKI. They would have been able to create any website and pass it off as legitimate, which could have broken the internet.
Man-In-The-Middle Attacks on Public Key Infrastructure
The PKI system relies on certificates to authenticate data coming over a network. However, a man-in-the-middle attack can be performed to spoof data, or simply to intercept and log it.
With self-signed certificates, an attacker can simply intercept the connection between the browser and the server and send their own fake certificate. Since the certificate is self-signed, any certificate will be treated as legitimate.
This is solved by a root certificate authority. Certificate authorities, if they are to be held as absolutely trustworthy, have their public keys stored in the browser upon installation. The user can then use the public key of the certificate authority to verify that the public key of the website was signed and authorised by the authority. This means a man-in-the-middle attacker is unable to inject their own public key, as it will not carry a valid signature from the authority.
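A toy sketch of the signing-and-verification idea using the pyca/cryptography package (a real certificate is a full X.509 document; here the "certificate" is just the site's public key plus a CA signature):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# the CA's public key ships with the browser; its private key stays secret
ca_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# the website's "certificate": its public key, signed by the CA
site_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
site_pub = site_key.public_key().public_bytes(
    serialization.Encoding.PEM, serialization.PublicFormat.SubjectPublicKeyInfo)
signature = ca_key.sign(site_pub, padding.PKCS1v15(), hashes.SHA256())

# the browser verifies the signature with the CA public key it already trusts;
# a MITM swapping in their own key cannot produce a valid CA signature
try:
    ca_key.public_key().verify(signature, site_pub,
                               padding.PKCS1v15(), hashes.SHA256())
    print("certificate accepted")
except InvalidSignature:
    print("rejected: possible man in the middle")
```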
San Francisco Facial Recognition Ban
San Francisco passed a bill legally banning the use of facial recognition by city authorities.
This came after concerns that facial recognition for the purposes of law enforcement posed a great privacy and security risk.
There are several issues with facial recognition when it comes to a person’s freedom and their security:
- You lose anonymity:
With the use of facial recognition software, anyone can be tracked from anywhere, at any time, through any interface. You could be going to a store, going to work, or even taking a taxi. The collection of your data could grow to the extent that your current location, as well as your predicted future locations, can be deduced, which poses a great privacy concern to all people.
- You may be misidentified:
The use of facial recognition software for law enforcement is greatly problematic. Applications such as police body cams with live facial recognition are highly prone to risk and guaranteed to have a false acceptance rate greater than zero. This means that people could be detained, arrested, assaulted or even killed within legal limits by law enforcement due to the introduction of this technology.
- You are constantly monitored:
Government authorities will be able to gather more information than ever on your whereabouts and your activities, which means that you will be classified into more categories and placed under more scrutiny and supervision. This poses a risk to everyone's safety and security.
Biometrics In Security
Biometrics are widely applied in many security domains for the purposes of identification and authentication. The difference between the two is that identification asks the question “Who is this person?”, whereas authentication asks “Is this person who they claim to be?”.
Biometrics have become more accurate in recent years, and have been able to perform identification and authentication checks in real-time, dynamic situations.
One example is live crowd facial recognition. This software performs real-time checks using various computer vision techniques to identify individual faces as small as 20 pixels across in a crowd.
Facial recognition techniques are applied to around 1 billion passports worldwide and used to authenticate travellers in airports. They are widely applied in many countries, including in the US with the IDENT system, and in the EU with EURODAC.
In some countries, such as Gabon, biometric data such as fingerprints and ID photos are used as a form of legitimate citizenship identification. This is used to authenticate citizens for access to social services and healthcare, to prevent the identity fraud that costs the country a substantial amount every year.
The world's largest biometric gathering project, the Aadhaar project, creates a 12-digit identification number from 12 biometric data points, including iris scans and fingerprint scans. This system is a voluntary opt-in for Indian citizens, and does not count as proof of citizenship, but it does qualify as proof of residency. Currently 99% of the adult population of India is identified in the Aadhaar project.
Identification through Aadhaar is used for identification in banking and financial institutions, educational institutions, purchasing sim cards, welfare schemes and more.
However, the identification scheme is controversial, as it discourages people with sensitive personal circumstances from accessing services that they require; for example, people with HIV may avoid treatment because accessing it requires handing over their biometric data.
Biometric authentication is also being implemented in a distributed, decentralised way, with some companies implementing banking cards that hold the biometric data of the owner. Traditional biometric authentication requires access to a central repository of biometric data, introducing inconvenience issues, higher risk of security compromises, and privacy concerns. The banking cards enable the user themselves to own their data, keep it private, authenticate themselves faster, and to reduce the risk of their identity data being hacked, as their information is decentralised.
NSW Land Titles Privatisation: Assets and Risks Involved
Tangible Assets:
- Legal property ownership documents
Intangible Assets:
- Australian property rights and legal authority over property ownership
- Privacy of Australians' property ownership
Risks:
- Increase in costs to home owners when buying or selling
- Increase in housing prices to reflect increased transaction costs
- Increase in inefficiency compared to the prior state-run operation
- Decrease in overall return to the government
- Domestic documents released offshore and held under the jurisdiction of another sovereign state, increasing the risk of blackmail or other nefarious situations
- Increase of risk in private data being exposed or exploited
HMAC
HMAC is a widely used MAC construction, applied in SSL/TLS, which provides security at the transport layer of a computer network.
A MAC built naively from a plain hash is flawed. In the secret-suffix construction h(plaintext|key), an attacker who finds a second preimage of the same length, with h(second-preimage) == h(plaintext), can forge a valid MAC without having the key, because then h(plaintext|key) == h(second-preimage|key). The secret-prefix construction h(key|plaintext) fares no better: with common Merkle-Damgård hashes it is vulnerable to length-extension attacks, where an attacker can append data to the message and compute a valid MAC for the extended message.
HMAC solves these problems by hashing twice, with two keys derived from the master key: HMAC(K, m) = H((K XOR opad) | H((K XOR ipad) | m)). This effectively re-scrambles the inner hash, so an attacker must defeat both layers. There is currently no known way for an attacker to generate a correct HMAC in a reasonable amount of time without knowing the key.
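A minimal sketch of the HMAC construction, checked against Python's standard library implementation:

```python
import hashlib
import hmac

def hmac_sha256(key: bytes, message: bytes) -> bytes:
    block_size = 64                                  # SHA-256 block size in bytes
    if len(key) > block_size:
        key = hashlib.sha256(key).digest()           # long keys are hashed first
    key = key.ljust(block_size, b"\x00")             # then padded to a full block
    ipad = bytes(b ^ 0x36 for b in key)              # inner derived key
    opad = bytes(b ^ 0x5C for b in key)              # outer derived key
    inner = hashlib.sha256(ipad + message).digest()
    return hashlib.sha256(opad + inner).digest()     # outer hash wraps the inner

key, msg = b"secret key", b"hello world"
assert hmac_sha256(key, msg) == hmac.new(key, msg, hashlib.sha256).digest()
```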