stemandleafdiagram
33 posts
a-levels | 17
stemandleafdiagram · 3 days ago
Text
Let's write a C++ program together. I'll start us off.
int main() {
823 notes · View notes
stemandleafdiagram · 3 days ago
Note
dna results came back positive
a
String identified: a
Closest match: ADENINE!!!
Tumblr media
6K notes · View notes
stemandleafdiagram · 6 days ago
Note
what are your favorite lichens in each color of the rainbow? Are there any purple lichens?
I am happy you asked:
Tumblr media
Red: Lasallia rubignosa
Orange: Teloschistes capensis
Yellow: Candelariella rosulans
Green: Sticta canariensis
Blue: Lobarina scrobiculata
Purple: Peltigera hymenina
2K notes · View notes
stemandleafdiagram · 6 days ago
Text
YO NEW HELL ANT JUST DROPPED
Tumblr media
IN THE SOUTHERN HEMISPHERE NO LESS
This is from a study just published a few days ago by Lepeco et al. (2025).
Hell ants (subfamily: Haidomyrmecinae) are super cool ants that possessed vertically closing jaws.
Below is a model of the ant's body/anatomy generated via micro-computed tomography
Tumblr media
Vulcanidris cratensis is a new species discovered in Brazil. It dates back to the Lower Cretaceous and, at the time of writing, is the oldest known ant to science!
Tumblr media
831 notes · View notes
stemandleafdiagram · 6 days ago
Text
Tumblr media
all my homies hate oganesson
3K notes · View notes
stemandleafdiagram · 7 days ago
Text
Modern Cryptography
(stemandleafdiagram long-form post!)
~ 2900 words
As you may have guessed, I love cryptography and ciphers even though I still don’t know much about them. I think classical ciphers are super interesting stuff and I love breaking them, but I realised I knew nothing about anything cipher-y from after the end of World War 2, so I sought to rectify that!
(This is SO long - I got quite carried away (I do apologise), and as I wanted to cover so much here there are some concepts I don’t explain very thoroughly, but there are so many resources online if you want to learn more! When explaining how different forms of encryption work, I will often use the names commonly used by other cryptographers in my examples. Alice and Bob are two people trying to communicate, while Eve (an eavesdropper) is trying to intercept their messages.)
Symmetric Encryption
The start of modern cryptography is not a definite thing (obviously so, as the “eras” of cryptography are just labels people use to refer to them generally) but I decided to start my timeline for modern cryptography in the 1960s, as during this time, research projects at the American company IBM (International Business Machines) led to the creation of a cipher called the Lucifer cipher. 
This cipher was one of the first block ciphers to be made. A block cipher is a cipher that operates on fixed-size blocks of data (commonly 64 or 128 bits) at a time. This is in contrast to a stream cipher, which encrypts 1 bit of data at a time. (In a way, you could consider classical ciphers stream ciphers.) If the plaintext (un-encrypted data) is smaller than the block size, padding schemes add extra data to fill it out to a full block. Modes of operation define how amounts of data longer than one block are encrypted. For example, the blocks of data can be encoded separately, or the encryption of one block can be affected by the previously encoded block of data.
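To make the padding idea concrete, here’s a minimal Python sketch of a PKCS#7-style scheme (a common padding approach - I’m not claiming it’s the exact one Lucifer or DES used):

def pad(data: bytes, block_size: int = 16) -> bytes:
    # PKCS#7-style: append n copies of the byte n, where n is how many bytes
    # are needed to reach a multiple of the block size (16 bytes = 128 bits).
    n = block_size - (len(data) % block_size)
    return data + bytes([n] * n)

def unpad(data: bytes) -> bytes:
    # The value of the last byte tells us how much padding to strip off.
    return data[:-data[-1]]

padded = pad(b"HELLO")
print(padded, len(padded))   # 16 bytes: b'HELLO' plus 11 bytes of value 11
print(unpad(padded))         # b'HELLO'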
The Lucifer cipher underwent a lot of alterations, and eventually the National Bureau of Standards adopted this altered version of Lucifer as the Data Encryption Standard, or DES, in 1977. Some of the alterations made that led to DES were actually quite controversial! For example, the key size in Lucifer was 128 bits but only 56 bits in DES, which worried people, as a shorter key is easier to brute force. It’s actually rumoured that the NSA (National Security Agency) did this so that DES wasn’t too strong for them to break. Another change was to the design of components called S-boxes, which are effective at protecting against a form of attack called differential cryptanalysis. What I found really cool was that their effectiveness wasn’t talked about until much later, which suggests that the NSA knew about differential cryptanalysis 13 years before this information went public!
The DES is no longer secure enough for modern use, and in 2001 was replaced by the AES, or the Advanced Encryption Standard, which is its direct successor and is still used today. The reason that AES is more secure than DES is that the algorithm itself is more complex, but more importantly it uses longer key lengths. Using keys that are 128, 192, or 256-bit long means that the encryption is much stronger than using the 56-bit DES.
Lucifer, DES, and AES are all symmetric ciphers as well as being block ciphers. This means that the key used to encrypt the plaintext is the same key that is used to decrypt the data. Only some block ciphers are known publicly. DES and AES are the most famous of the lot, but other ones such as IDEA, Twofish, and Serpent exist too. 
As a whole, encrypting with block ciphers is slower, as an entire block must be captured before it can be encrypted or decrypted, and if just 1 mistake is made the whole block can be altered. But they are stronger than other ciphers. Each mode of operation also has its own pros and cons. If each block is encoded by itself then the blocks can be encrypted in parallel (which is faster), but this is prone to cryptanalysis, as two identical blocks of plaintext would produce two identical blocks of ciphertext, revealing patterns. The other modes are more complex and take more time to encrypt, but are more secure.
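Here’s a toy Python demo of why the mode matters. The “block cipher” below is just XOR with the key - completely insecure, but it makes the difference between encrypting blocks separately (ECB) and chaining them (CBC) visible:

import os

BLOCK = 8  # toy 8-byte blocks

def toy_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Stand-in for a real block cipher: XOR with the key (NOT secure!).
    return bytes(b ^ k for b, k in zip(block, key))

def ecb_mode(plaintext: bytes, key: bytes) -> bytes:
    # Every block is encrypted independently of the others.
    return b"".join(toy_block_encrypt(plaintext[i:i + BLOCK], key)
                    for i in range(0, len(plaintext), BLOCK))

def cbc_mode(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    # Each block is XORed with the previous ciphertext block before encryption,
    # so identical plaintext blocks give different ciphertext blocks.
    out, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        mixed = bytes(b ^ p for b, p in zip(plaintext[i:i + BLOCK], prev))
        prev = toy_block_encrypt(mixed, key)
        out.append(prev)
    return b"".join(out)

key = os.urandom(BLOCK)
message = b"ATTACK!!" * 2                               # two identical blocks
print(ecb_mode(message, key).hex())                     # first half == second half
print(cbc_mode(message, key, os.urandom(BLOCK)).hex())  # halves differ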
For symmetric encryption to be used, both parties need to agree on the same key for the message to be shared secretly, which is a massive problem. How can the key be transferred securely?
Key Exchange
A year before the implementation of DES, in 1976, another massive breakthrough was made. Researchers Whitfield Diffie and Martin Hellman created the Diffie-Hellman key exchange, which was a method to share encryption and decryption keys safely across an unsecured network. The way it works depends on one-way functions. Typically in maths, most functions are two-way, as using a function on a number is pretty easy to undo. However, some operations are not like this: multiplying two prime numbers is very easy, but factorising the product back down to its primes is excruciatingly difficult, and the difficulty only increases as the numbers get bigger (this is the one-way function behind RSA, which we’ll meet shortly). The one-way function Diffie and Hellman actually used is modular exponentiation, which is easy to compute but very hard to reverse - reversing it is known as the discrete logarithm problem.
Say Alice and Bob are trying to share a key using the Diffie-Hellman exchange. Firstly, both of them need to compute a function of the form G^a mod P. P must be prime, and G and P are shared publicly so Alice and Bob can agree on them. The numbers are massive (usually 2048 bits) to make brute forcing harder, and they are generated randomly. Alice and Bob each choose a different secret number for a and run their functions. They will get different answers, and they share these answers with each other publicly (these are the public values). Then, Alice and Bob each run the function G^a mod P again, but with G set to the other person’s answer. The values of a and P stay the same, and Alice and Bob arrive at the same secret answer, which can then be used as the key to encrypt the message!
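Here’s the whole exchange as a tiny Python sketch - the numbers are toy-sized so you can follow them (a real P would be around 2048 bits, and the secret exponents would be chosen randomly):

# Publicly agreed values (toy-sized for readability)
P, G = 23, 5

a = 6                        # Alice's secret exponent
b = 15                       # Bob's secret exponent

A = pow(G, a, P)             # Alice computes 5^6  mod 23 = 8  and sends it to Bob
B = pow(G, b, P)             # Bob computes   5^15 mod 23 = 19 and sends it to Alice

# Each raises the *other* person's public value to their own secret exponent.
alice_secret = pow(B, a, P)
bob_secret = pow(A, b, P)
print(alice_secret, bob_secret)   # both print 2 - the shared secret key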
Now, let’s say Eve wanted to find out what the key was. She intercepts their messages, but even though she has the exact information Alice and Bob shared with each other, she doesn’t know what the secret key is unless she can reverse the original equation, making this key exchange very secure! Modular arithmetic (the mod P part of the equation) is notoriously hard to reverse. If 2048-bit numbers are used, brute forcing it means checking on the order of 2^2048 possibilities.
Asymmetric Encryption
The Diffie-Hellman key exchange was huge - I mean, any technology created 50 years ago that’s still in use must be pretty good, but it really only shone for sharing keys, not for encryption. For example, the issue with sending communication such as emails using Diffie-Hellman was that both parties needed to be online for a key to be generated as information needs to be mutually shared in the process, so you couldn’t just send an email using it whenever you wanted, which was a shame. However, one particular thing it did lead to was the invention of asymmetric encryption.
In 1977, the idea of public key cryptography (proposed by Diffie and Hellman) came to fruition in the form of RSA. Named after its creators (Ron Rivest, Adi Shamir, and Leonard Adleman), RSA works by every user having a public key, accessible by everyone, so anyone wanting to send that user a message just needs to look it up. The sender encrypts the message with the recipient’s public key, and when the recipient comes online they are able to decrypt it with their own private key, which isn’t shared with anyone. It also uses a one-way function like the Diffie-Hellman exchange, albeit a more complex one. RSA is still used today for things like sending messages or visiting secure websites, and the keys tend to be 2048 or 4096 bits long so that they are hard to break. 1024-bit RSA keys were disallowed (for US government use) in 2013.
Encrypting via public key and decrypting via private key is great for keeping sensitive information safe, but what if you encrypted with your private key and the message was decrypted with your public key? The purpose of this is to prove the sender is who they say they are - if the public key can’t decrypt the message, then either the wrong key was used or the message has been meddled with in transit. To keep the message secure as well, the sender could encrypt with their private key and also the recipient’s public key, so only the recipient could decrypt and read it. If the message is particularly long, the digital signature can be applied to a hash of the original message rather than the whole thing. RSA was the first system to have this dual functionality.
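To show both directions at once, here’s textbook RSA in Python with comically small primes (real keys use primes over a thousand bits long, plus padding schemes that I’m skipping entirely):

p, q = 61, 53
n = p * q                    # 3233: the public modulus
e = 17                       # public exponent
d = 2753                     # private exponent: (e * d) % ((p-1)*(q-1)) == 1

m = 65                       # the "message", as a number smaller than n

# Confidentiality: encrypt with the recipient's PUBLIC key, decrypt with their PRIVATE key.
c = pow(m, e, n)             # 2790
assert pow(c, d, n) == m

# Signing: transform with your PRIVATE key; anyone can verify with your PUBLIC key.
signature = pow(m, d, n)
assert pow(signature, e, n) == m
print(c, signature)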
So, there we go - the two main encryption types used today: symmetric and asymmetric. Symmetric encryption is useful for large amounts of data in particular, while asymmetric is more secure, but is slower and requires more resources and therefore can be more expensive. In practice, many secure systems will use both symmetric and asymmetric ciphers. Although, the actual security of a message comes down to the length of the key used - the longer or more complex it is, the more secure the encryption is. As the number of bits increases, the total number of arrangements for these bits increases exponentially. The IBM website states that a 56-bit key could be brute forced in around 400 seconds, a 128-bit key would take 1.872 x10^37 years, while a 256-bit key would take 3.31 x10^56 years.
Going Quantum
It goes without saying how important modern cryptography is. These encryption methods are used to keep confidential information such as credit card details, messages, and passwords safe for users like you and me, but they also maintain government security on a national level. They’re also vital for cryptocurrency and digital signatures (as mentioned before), as well as browsing secure websites.
A big threat to current cryptographic standards is the development of quantum computers, which are based on principles of quantum mechanics. I won’t go into detail on how quantum computers work, but using quantum mechanics they are able to do massive numbers of calculations simultaneously. Although quantum computers already exist, they aren’t powerful or capable enough to threaten our current encryption algorithms yet. But researchers suggest that they could be within a decade. People could use a technique called “store now, decrypt later”, where they keep currently encrypted messages so that they can decrypt them when quantum computers are available. This could cause many problems in the future, particularly if the messages involve secrets on an international level.
Quantum mechanics can be used in cryptography as well! Quantum cryptography, originally theorised in 1984 by Charles Bennett and Gilles Brassard, can be used to exchange keys even more securely than Diffie-Hellman - this is QKD, or Quantum Key Distribution. The reason it’s so incredible is that data secured using it is immune to traditional cryptographic attacks. Now, I’m no quantum physicist (or any type of physicist!) but I will try my best to explain how it works. It works by sending photons, which are light particles, from the sender (eg. Alice) to the receiver (eg. Bob). These photons are sent at different orientations, and Bob can measure each photon’s polarisation when he receives it.
Let’s say that photons can be in a vertical, horizontal, or one of the two diagonal orientations. We can pass them through a polarised filter to find out what orientation they are in. The filters are also specifically oriented. A vertical filter would let the vertical photons through, block the horizontal ones, and let the diagonal ones in 50% of the time but at the cost of the ones that pass through being reoriented. Therefore, when a particular photon successfully passes through, it’s impossible to know whether it was originally diagonal or vertical. This is important as it means that it’s possible to detect if someone else has been eavesdropping as the polarisations would have been changed.
Bob can use two measurement bases to receive the photons Alice sent. One will capture vertical and horizontal orientations, and one will capture diagonal ones. Bob has no idea what orientation Alice used for each photon, so he switches between his bases randomly, and will get it wrong some of the time. This is fine, as Alice and Bob then compare to see which ones Bob got right, and the ones he correctly guessed are used as a key (each photon representing 1 bit). The key can then be used for other encryption methods, such as AES.
The reason this works is that if Eve wanted to pry, she has to guess which base to use as well when she intercepts the photons (so she will also make mistakes), but she has no way of checking whether her records are correct or not, unlike Bob. It’s impossible for her to obtain the key as well. What’s more, when she guesses wrong she will change the photon polarisation, so Alice and Bob know that she’s eavesdropping.
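Since the basis-matching step is the part I found hardest to picture, here’s a little Python simulation of just that step (no eavesdropper, no error correction - just “keep the bits where the bases happened to match”):

import random

n = 32                                                 # number of photons Alice sends

alice_bits = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]  # '+' rectilinear, 'x' diagonal

bob_bases = [random.choice("+x") for _ in range(n)]    # Bob guesses a basis each time
bob_bits = [bit if ab == bb else random.randint(0, 1)  # wrong basis = 50/50 result
            for bit, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# They publicly compare *bases* (never the bits) and keep the matching positions.
key = [bit for bit, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
print(key)                                             # roughly n/2 bits of shared key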
Quantum cryptography would have huge security benefits if implemented on a wide scale due to its ability to prevent eavesdroppers, and the fact that it would be resistant to quantum computers. However, it is still in development. One key drawback is the specific infrastructure that is needed, and fiber optic cables have a limited range. This means that the number of destinations the data could be sent to is limited, and the signal cannot be sent to more than 1 recipient at any time.
As well as quantum cryptography, NIST (the National Institute of Standards and Technology) and other cryptographers have been working on cryptographic algorithms that would stay secure even in the face of quantum computers. Ideas include lattice-based cryptography, hash-based cryptography, and code-based cryptography among others; some lattice- and hash-based schemes were standardised by NIST in 2024, though widespread adoption is still in its early stages.
One idea that isn’t post-quantum but is already in wide use is Elliptic Curve Cryptography. Elliptic curve cryptography (ECC) is a form of asymmetric encryption that uses points on an elliptic curve to generate keys in a more efficient manner than traditional methods. It creates shorter encryption keys, which means that fewer resources are needed while the keys are simultaneously harder to break. Improving the security of current systems usually just involves lengthening the keys, which slows down the encryption/decryption process, so the fact that ECC doesn’t need to do this gives it a big advantage. It is already used by the US government, iMessage, and Bitcoin, among others.
Sidenotes
With the maths of these encryption methods being so strong, one key vulnerability is the people that utilise them, which is no surprise. Side channel attacks are a way to break cryptography by using information physically leaked from it. One attack, called a TEMPEST attack, is a technique that can pick up electromagnetic transmissions from a device as far as 300m away. These are often done by the FBI, but honestly can be done quite easily by some nerd who has some money to spare and can sit in a car outside your window. By monitoring the radiation emitted from your computer screen, the attacker can spy on you and your data. Another thing that can be monitored is your power consumption. Cryptography is energy intensive, and this attack has been able to recover RSA private keys in testing.

Other attacks include measuring the amount of time required to encrypt data, which can perhaps be used to find factors or exponents. To combat this, encryption methods can add timing noise as a countermeasure. Or, an attacker can listen to someone type to find out their passwords, though distinguishing different key presses needs a sophisticated machine learning model. Side channel attacks have actually been around for ages, but their use has been severely limited by the fact that the attacker needs to be physically close to the victim. They could get easier with time, however, as smartphones and drones can act as microphones remotely.
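As a tiny illustration of the timing idea, here’s a naive password check next to a constant-time one in Python (hmac.compare_digest is the standard library’s constant-time comparison):

import hmac

def naive_check(secret: str, guess: str) -> bool:
    # Returns as soon as one character differs, so how long it takes
    # leaks how many leading characters of the guess were correct.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_check(secret: str, guess: str) -> bool:
    # Takes the same time however much of the guess is right.
    return hmac.compare_digest(secret.encode(), guess.encode())

print(naive_check("hunter2", "hunter1"), constant_time_check("hunter2", "hunter1"))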
Another cool thing I haven’t covered yet is hash functions, which take in an input and map it to a fixed-length string of characters that looks random but is determined entirely by the original data. The output is called a hash digest or hash value. A good hash function makes it practically impossible to find two different inputs with the same hash value, and all outputs are the same length, making it hard to guess the original text’s length. Hashing is vital for digital signatures and storing passwords securely.
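You can play with hashes straight from Python’s standard library - notice how adding a single letter changes the whole digest, while the length stays fixed:

import hashlib

print(hashlib.sha256(b"stem and leaf diagram").hexdigest())
print(hashlib.sha256(b"stem and leaf diagrams").hexdigest())
# Both digests are 64 hex characters (256 bits), whatever the input length.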
Finally, if anyone managed to get to the end, then thank you! I really love cryptography and I find it astounding that we’ve been able to develop it into such a complex yet intrinsic part of daily life. Honestly, I had so much fun researching for this post! Encryption and cybersecurity and the future of computing is so interesting and I’m really glad I decided to write this :)
Final final note you should totally go and read the Code Book by Simon Singh! Trust me trust me it’s so good...
2 notes · View notes
stemandleafdiagram · 14 days ago
Text
this is me btw
BJ has a very clear understanding of Objectives unfortunately he lacks that level of comprehension when it comes to tactics methods procedures etc. Thus: his Behaviors.
53K notes · View notes
stemandleafdiagram · 21 days ago
Text
Prostheses
(stemandleafdiagram long-form post!)
~2300 words
Recently, my friends and I have been having vigorous debates with everyone we talk to about whether they would rather lose an arm or a leg. We are all lucky enough for this to be a hypothetical, and as the debate went on we decided that prosthetics would be available for whichever limb we chose to lose. This made me realise that I did not know much about artificial limbs and how far they have developed at all, as in my head it was just a blur of vague science-y magic, so I started doing a bit of reading!
First off, a prosthesis or prosthetic implant is an artificial device that replaces a missing body part. They are either created by hand or with CAD (computer-aided design) software. There is a wide range of prosthesis types nowadays, but the story of the development of the first artificial limbs starts in Ancient Egypt, with two artificial toes dating to before 600BC. The first one, known as the “Greville Chester toe”, was made out of cartonnage, and is about 2600-3400 years old! Cartonnage is a type of papier-mâché with glue, linen, and plaster. Different cultures have different ideas about what makes a person “whole”, so it can be uncertain whether these early prosthetics were for appearances or for functionality. As it does not bend, however, historians reckon that the Greville Chester toe was probably cosmetic. The second toe, the “Cairo toe”, was the more advanced of the two. Likely to have been practical as well as cosmetic due to its flexibility, this toe was refitted for the wearer multiple times.
Tumblr media
(Credit: Dr Jacky Finch, BBC)
Toes were not the only early prosthetics! Another early prosthetic was the “Capua Leg” from Italy, dating back to 300BC. It was made for a Roman nobleman out of bronze and hollowed-out wood, and was held up with leather straps. Sadly, it was destroyed in WW2, but there is a replica of it at the Science Museum in London. Another famous prosthetic was Götz von Berlichingen’s iron hand. He was a German mercenary who lost his right arm in battle, and had two mechanical iron hands made. The second one, in the image below, allowed him to hold objects.
Tumblr media
Early artificial limbs were made out of heavy materials, such as wood and metal. They remained rather basic during the Middle Ages, and people who could afford them could have a basic hand hook or peg leg fitted. Knights were often fitted with simple prosthetics that allowed them to hold up a shield or grasp a sword to let them continue a career in fighting, and prosthetics were crafted by tradesmen.
In the 1400s to 1700s, prostheses from countries such as France and Switzerland started to incorporate mobility using hinges, cables, gears, cranks, and springs, allowing hands to have articulated fingers and limbs to rotate or bend. 
The development of artificial limbs really got rolling after the American Civil War (1861-1865). The huge number of casualties caused the demand for artificial limbs to quadruple, and many veterans started designing their own prosthetics. For example, James Hanger was one of the many soldiers that lost a leg in the fighting, and found it understandably difficult both physically and mentally to resume daily life with a peg leg. As an engineering student, he designed a limb that was able to bend at the knee and the ankle and patented it. It is called the “Hanger Limb”:
Tumblr media
It was also the first leg to use rubber in the ankle and cushioning in the heel to make wearing it less painful. 
Moving along, the development of artificial limbs was again accelerated by war. A couple of years before World War 1, in 1912, David W. Dorrance patented the split-hook design for upper limb prostheses.
Tumblr media
This invention allowed users to continue their labouring jobs as they could now grip and manipulate objects. Additionally, during the war, mass-production of prostheses started to happen for the enormous number of casualties. Towards the end of the war, people started to experiment with materials other than just wood, with metal being used in particular. Using metal instead of wood would mean that repairs (extremely time consuming) were needed less often. The production of limbs was also standardised, meaning that less time was needed to make and repair them, but fitting processes were still done so that each war veteran still got a personalised limb.
World War 2 produced about half as many amputees as the first. This was because advances in surgical techniques, improvements in treating infections, and the increased availability of blood transfusions after WW1 reduced the need for amputation. However, this doesn’t mean that there wasn’t still a huge demand for artificial limbs. In the UK, Queen Mary’s Hospital in Roehampton became the centre for manufacturing, where tens of thousands of war pensioners were able to get their own prosthesis.
On the topic of prosthetics made for special activities I have to give a special mention to this prosthetic: 
Tumblr media
(Credit: Wellcome Images CC BY 2.0/Wikimedia)
This arm prosthetic limb has widespread fingers, a padded thumb and little finger, and the middle 3 fingers are smaller than normal. If you haven’t guessed, this hand was used for playing piano! The special modifications allowed the user to span 1 octave. The idea of specialised artificial limbs gained traction around 1900, leading to prosthetics like these. Unfortunately, the owner of this hand is unknown, but as a piano player I think this hand is absolutely fantastic!
In the 1970s-1990s, lighter, more durable, and more flexible materials began to be introduced to replace wood and leather in limbs, such as plastics, resins, and carbon fiber. These materials are also able to make the limb mimic the look and feel of natural skin better as well as being light and easy to clean.
As you can imagine, the actual manufacturing of an artificial limb is rather extensive. This is done by a prosthetist, who is responsible for the prescription, design, and management of a prosthetic. Firstly, the residual limb is measured, and so is the user’s body to determine what size limb is needed. Most modern artificial limbs are attached with a belt, cuff, or suction, so a silicone liner is fitted. Next, the socket and plastic parts of the limb are formed, followed by the metal parts.
Although prosthetic limbs can be extremely useful in allowing a person to navigate daily life with much more ease after an amputation, not everyone will benefit from one. For example, whether a prosthetic leg would be beneficial can depend on the person’s health and condition. There needs to be enough soft tissue to cushion the remaining bone, the skin on the residual limb needs to be healthy, and if the amputation was done below the knee the prosthetic can be easier to use, as the knee joint is still intact and less effort is required to move. Having a prosthetic leg isn’t easy either. Users may suffer from excessive sweating, or hyperhidrosis, which can lead to skin issues and affect the fit of the prosthesis. The fit of the socket can be affected if the residual limb changes shape after amputation, and weakness in the limb can mean it’s difficult to use the prosthetic for long periods of time.
New technology is constantly improving and developing, and as time goes on prostheses will become much more advanced, allowing users to use them with more ease and comfort. Nowadays, prosthetics are advanced, increasingly specialised, and becoming easier and more intuitive to use. We have come a long way!
In particular, bionic and myoelectric limbs are gaining traction. First developed in the USSR in 1958, a myoelectric limb is controlled by contractions of the user’s actual muscles. This is done using electrodes that sit on the skin inside the limb socket and detect the electrical signals given off when muscles are contracted. These signals are sent to a controller to be converted into motions, allowing the user to control the limb with their own muscle contractions! The limbs are powered by motors and batteries. These prosthetics have many advantages - they can look more natural, they can be better for people experiencing phantom limb pain (when you feel pain in a limb that has been amputated), and they mean that users don’t need to use their other limb to complete tasks - but they may not be as durable. As well as that, myoelectric limbs cannot get wet, which can be a pain, so advancements have recently been made to integrate waterproof materials into the devices.
Tumblr media
(Credit: Open Bionics)
This is the Hero Arm, an example of a myoelectric prosthetic made by Open Bionics. As evident, these new limbs can be designed to look really cool and robotic! If someone had an amputation above their elbow or at shoulder level, they can undergo targeted muscle reinnervation surgery (TMR), which reroutes the nerves that controlled the arm or hand muscles so that they can still control the myoelectric limb.
Of course, artificial limbs are not the only prostheses available. Craniofacial prostheses are ones for the ears, nose, eyes, teeth or neck, for example. Passive devices can be made for upper limbs, which are mainly for cosmetic purposes or specific activities. Myoelectric technology is also available for prosthetic fingers, hands, and elbows.
Advancements in 3D printing also mean plenty of good things for artificial limbs, as they allow the rapid production of customised prosthetics so that each one is tailored to the user and their needs. The parts that are 3D printed can be more lightweight as well. The manufacturing process is sped up and costs are reduced, letting a wider range of people have access to more advanced prosthetics. 3D scanners let prosthetists create digital models of amputees’ residual limbs to make it easier and quicker to create their prostheses.
Microprocessors that adjust the limb’s response based on terrain and activity can also be used. These are used in particular in knee joints, allowing the prosthesis to fit into the wearer’s natural gait, walk down stairs in an easier way, and even letting them run in some cases.
Although the development of these is still in its early stages, the use of neural interfaces and AI could improve prostheses greatly. Neural interfaces can establish a connection between the nervous system and the prosthetic, which could mean that wearers will be able to feel sensations through their prosthetics using their own nervous systems! On the other hand, AI and machine learning can adapt to the user’s habits and preferences over time, such as their gait, grip, and other movements, in order to improve the limb’s behaviour. One such example of this is Atom Limbs’ prosthetic arm, which is still in development, but allows the user to control it completely with just their muscle contractions.
The major issue with these modern prosthetic limbs is that they are extremely expensive. Atom Limbs estimated that their product would be priced at around £15,000, which is still considered fairly cheap compared to other bionic arms. The average person would not be able to afford one easily despite how innovative these new designs are.
Finally, what about artificial tissue and organ prostheses - how far are we with those? Well, artificial organs are bioengineered devices or tissues that are added into the human body to replace or augment existing organs, and there have been advancements in the development of many different organs. To help with sight, retinal prostheses can be used - tiny microchip devices that allow visual information to still be obtained even if the original retinal cones and rods have been lost. They are now approved for use, but can only restore a limited amount of eyesight. For those who struggle with hearing, implants such as cochlear implants or bone-anchored hearing aids (BAHA) can be fitted. A BAHA bypasses the outer and middle ear by sending vibrations through the skull straight into the cochlea.
Continuing on, work has also been done for artificial nerves. As you may know, nerve cells in the central nervous system usually don’t regenerate, but nerves in the peripheral nervous system do, albeit slowly. One treatment for injured/lost peripheral nerves uses a nerve connection tube which connects two severed nerves and guides regeneration of the nerve. The development of artificial nerves is very promising for treating diseases such as Alzheimer’s. Next, to treat kidney disease, many people use external filtration systems such as dialysis, which are expensive and inconvenient. Research is being done into bioartificial kidneys instead. This is a device with two components: the haemofilter, which processes incoming blood to create ultrafiltrate, and the bioreactor, which uses kidney cells to process the ultrafiltrate. It can be implanted into the body and is about the size of a coffee cup.
The last artificial organ I wanted to take a look at was the heart. Currently, artificial hearts fall under two main types: the mechanical heart and the heart-lung machine. The mechanical heart is a device that acts as a pump for your blood and replaces the ventricles in your heart. Unfortunately, these are not permanent solutions, and are usually used temporarily by people with severe heart failure or during the wait for a heart transplant. On the other hand, the heart-lung machine (or the cardiopulmonary bypass) is used during open heart surgeries to remove blood from a person’s body, oxygenate it, then pump it back round the body. 
Tumblr media
A mechanical heart! (Credit: Abiomed) 
In conclusion I think it is astounding how cool science and technology is! I love how humanity is able to use so many innovative ideas to help people throughout history and I really hope that this sentiment continues, as although our societies aren’t always made with disabled people in mind, everyone has the right to live in one equally.
0 notes
stemandleafdiagram · 24 days ago
Text
K I’m teaching so this isn’t Dino related but it is science related
Tumblr media
1K notes · View notes
stemandleafdiagram · 1 month ago
Text
dear professor i cant seem to lock in. its so over
43K notes · View notes
stemandleafdiagram · 1 month ago
Text
things that happened to me when i was a woman in STEM:
an advisor humiliated me in front of an entire lab group because of a call I made in his place when he wouldn't reply to my e-mails for months
he later delegated part of my master's thesis work to a 19-year old male undergrad without my approval
a male scientist at a NASA conference looked me up and down and asked when i was graduating and if i was open to a job at his company. right before inquiring what my ethnicity was because i "looked exotic"
a random male member of the public began talking over me and my female advisor, an oceanographer with a pHD and decades of experience, saying he knew more about oceanography than us
things that have happened to me since becoming a man in STEM:
being asked consistently for advice on projects despite being completely new to a position
male colleagues approaching me to drop candid information regarding our partners / higher ups that I was not privy to before
lenience toward my work in a way I haven't experienced before. incredible understanding when I need to take time off to care for my family.
conference rooms go silent when I start talking. no side chatter. I get a baseline level of attention and focus from people that's very unfamiliar and genuinely difficult for me to wrap my head around.
like. yes some PI's will still be assholes regardless of the gender of their subordinates but, I've lived this transition. misogyny in STEM is killing women's careers, and trans men can and do experience male privilege.
80K notes · View notes
stemandleafdiagram · 2 months ago
Text
Tumblr media
dog toys that make you stop dead in the middle of the store and figure out their IUPAC names
442 notes · View notes
stemandleafdiagram · 3 months ago
Text
im just not convinced humans were ever meant to be this busy
24K notes · View notes
stemandleafdiagram · 3 months ago
Text
scientists in the 1990s, putting a Get More Purple gene attached to a harmless plant virus into an already purple petunia: please get more purple
the petunia, sensing an apparent honest to god Get More Purple Disease, using the previously undiscovered RNAi antiviral ability to shut down all other purple genes along with it just in case: you put VIRUS in petunia? you infect her with the More Purple?? oh! oh! her children shall bloom white! jail for mother, jail for mother for One Thousand Years!!!!
251K notes · View notes
stemandleafdiagram · 3 months ago
Text
(stemandleafdiagram first long-form post!!)
~2500 words
A little over a month ago (22 January), it was the 85th anniversary of the first British break of the Enigma cipher! The Enigma was used by the Germans in WW2 to communicate, and with how complex the machine is, it’s no surprise that they thought it was completely impenetrable. The breaking of the cipher was so influential that historians estimate that it shortened the war by at least 2 years. How was such a gargantuan feat achieved?
Firstly, how did the Enigma machine and its cipher even work? The Enigma cipher is a polyalphabetic substitution cipher, which means that each letter in the plaintext (the uncoded, original message) is replaced with another letter following a pattern. Multiple patterns - alphabets - are used, making polyalphabetic ciphers more secure than monoalphabetic ones.
Tumblr media
This is an Enigma machine (image from Wikipedia). You would input your plaintext with the typewriter keyboard, and the ciphertext would be shown on the lightboard (above the keyboard). The message is encoded through rotors within the machine (seen above the lightboard), which ensure each letter is encrypted differently by rotating each time a key is pressed. There was also a plugboard, or Steckerbrett (at the very front), where you could connect pairs of letters to swap them around, adding another layer of encryption to the cipher. Finally, there was a reflector, which made it possible for the current to flow backwards down the same path for decryption. Your complete, enciphered message would be sent using Morse code.
The specific way the machine needed to be set up would be changed regularly. Every day, the day key would change, and the day key consisted of:
1. The plugboard settings
2. The order that the rotors were to be placed - even though the number of possible rotors rose to 5 in 1938, only 3 could be placed in the machine at any time
3. Which letter each rotor should start on
4. The position of the rotor notches - these controlled how much the second and third rotor would step
This crucial information would be kept in a codebook that was distributed. However, the Germans increased security by adding a message key for each message. The message key was a random trigram chosen by the operator, such as “CFP”. It would be typed twice (in case of an error) and encrypted using the day key. Then, the operator would switch the machine to the message key settings to encrypt the actual message contents. This means that the actual message was encrypted with the message key, not the day key.
To decrypt the message, the operator had to set up the machine to the same starting state according to the codebook (allowing the current to follow the same path in reverse), input the ciphertext, and then the plaintext would light up on the lightboard.
If we consider the post-1938 scenario, with the 3 rotors being picked out of 5 possible options, each rotor having 26 positions on them, and the plugboard creating 10 pairings of letters, the military Enigma in use had 158,962,555,217,826,360,000 different settings. Although, it is also worth remembering that there were several different Enigmas in use simultaneously, such as machines that had 4 rotors used by German U-boats (submarines), or different keys being used in different places.
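That enormous number isn’t plucked from thin air - it falls straight out of the three choices described above. Here’s a quick Python check (rotor order × rotor starting positions × plugboard pairings):

from math import comb, factorial, perm

rotor_orders = perm(5, 3)            # pick and order 3 of the 5 rotors = 60
rotor_positions = 26 ** 3            # starting letter for each rotor = 17,576

# Plugboard: choose the 20 letters to be plugged, then pair them up.
plugboard = comb(26, 20) * factorial(20) // (factorial(10) * 2 ** 10)

print(rotor_orders * rotor_positions * plugboard)   # 158962555217826360000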
Compared to how well-known some other parts of the Enigma story are, it’s not a widely-known fact that the breakthrough was actually heavily dependent on the work of some Polish mathematicians done several years earlier. Being situated in between Germany on the West and Russia on the East, the Poles felt very vulnerable and threatened by the two powerful countries, and as a result they had been intercepting German communication since the end of WW1. From 1928, they realised that German radio transmissions had started using a new, stronger cipher system as their usual methods of attack were not cracking it, and in 1932 a team of mathematicians was set up by Poland’s cipher bureau to try and break it. This team included Jerzy Różycki, Henryk Zygalski, and Marian Rejewski.
This is Marian Rejewski (what a cool fellow)
Tumblr media
Where to begin breaking this new code? To let Rejewski figure out how the encryption worked from a mathematical perspective, Polish intelligence bought a commercial version of the Enigma. However, the German military had made too many changes to their upgraded military Enigma for this version to be of any use. True progress started to be made in 1931, when a failed businessman named Hans-Thilo Schmidt approached the French Secret Service. Schmidt worked in the German cryptographic agency, and offered Captain Gustave Bertrand the Enigma’s instruction manual, operating procedures, and codebooks for September and October of 1932. After both the French and the British were unable to make any progress on breaking the cipher even with this new information, Bertrand passed it to the Polish. This, alongside the fact that the first 6 letters of each message were the repeated message key, allowed Rejewski to deduce the internal wheel wiring of the Enigma machine.
The next problem was deducing how the Germans had connected the plugboard to the input rotor. The commercial and military Enigma obviously had their wiring in different orders, but through a lucky guess from Rejewski they figured out that it was wired alphabetically! This was a really important breakthrough as it allowed the Poles to build Enigma replicas, which could then be used to decipher intercepted radio messages. The first German messages were deciphered in January 1933.
The team was very successful - by 1938, they were reading around 75% of intercepted German radio transmissions, and between January 1933 and September 1939, the Polish had deciphered around 100,000 transmissions.
The mathematicians had developed several ways to decipher the messages. One device they used was called a cyclometer, which helped to assess the pattern of the key. Another tool, the Zygalski sheets (named after their inventor Henryk Zygalski), was also used. These sheets had a 51x51 grid, and holes were punched where certain pairs of letters were found. They came in sets of 26, and they would be placed on top of each other. After 12 of these sheets had been stacked, if a light shone through 1 specific hole, that signified a key had possibly been found.
A cyclometer
Tumblr media
In September 1938, the Germans stopped using a fixed rotor setting for each day, which made messages a lot harder to decipher compared to before, when the same setting was used for all messages that day. To combat this change, a month later Rejewski invented the bomba, or the “cryptologic bomb”, which took advantage of the fact that the message key was repeated at the start. Consisting of 6 Enigma machines, the bomba was a machine that would figure out the day key through brute force (trying each arrangement to see if it works). 6 bombas were used at the same time, each one being a different arrangement of the 3 rotors, and it now took around 2 hours for the bombas to analyse the 105,456 possible settings to find the day key. The first bomba was built by the AVA Manufacturing Company, which was the same company that built the Enigma copies for the Polish.
Unfortunately, at the end of 1938 the Germans added two more rotors, which meant that 3 rotors out of 5 possible ones were now being used for each message. This made it much harder to find keys, and now 60 bombas were needed to find the day key instead of just 6. A month later they increased the number of plugboard leads from 6 to 10. These changes meant that deciphering transmissions was beyond what the Polish Intelligence could achieve with their limited resources. 
On the brink of war, in July 1939, the Polish had to pass on their work to the French and British in fear of being invaded. As the Poles had been keeping their successes a complete secret, their achievements astonished the other two countries. The French and British were given copies of the Enigma machine, the wiring of the rotors, as well as the details of the Bombas, cyclometers, and Zygalski sheets.
The baton had been passed to the British and Bletchley Park. Located at Bletchley Park was Ultra, the British project tasked with decrypting enemy communications. The British began to read Enigma transmissions in August 1939. With access to more resources, the British were able to break the day keys faster, and found more ways of exploiting the Enigma’s greatest weakness: the humans operating it. You see, for each message key the German operator encoding the message was supposed to choose 3 random letters, but sometimes they got lazy and chose names, consecutive letters on the keyboard, or rude words. Or, they would use the same message key for multiple messages. One German operator used the initials of his girlfriend’s name, CIL, leading to these vulnerabilities being named cillies.
Other weaknesses include:
The people who compiled the code book did not let any of the rotors be in the same position for 2 days in a row, which reduced the number of rotor arrangements that needed to be tested every day.
The plugboard could not swap two adjacent letters.
The reflector made the Enigma self-reciprocal, but also made it so that no letter could ever be encrypted to itself. This was vital to the breaking of the cipher, and was especially useful when the cryptographers had a crib. A crib was a piece of plaintext that was likely to be part of the encrypted message, such as a name or a common phrase. Messages would often contain a weather report or the phrase “Heil Hitler”, for example. By sliding the crib along the ciphertext and checking whether any letters lined up with themselves, positions where a letter did match could be discarded, since no letter could be encrypted to itself. Bletchley Park called such a match a crash (there’s a little sketch of this check just below).
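Here’s a tiny Python sketch of that crash check - the ciphertext string is completely made up, but the crib is the classic weather-report word “WETTERBERICHT”:

def crib_positions(ciphertext: str, crib: str):
    # A crib can't sit anywhere that lines a letter up with itself,
    # because the Enigma never encrypts a letter to itself.
    good = []
    for i in range(len(ciphertext) - len(crib) + 1):
        window = ciphertext[i:i + len(crib)]
        if all(c != p for c, p in zip(window, crib)):
            good.append(i)   # no "crash" here - worth testing on the Bombe
    return good

print(crib_positions("QFZWRWIVTYRESXBFOGKUHQBAISEZ", "WETTERBERICHT"))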
Alan Turing :)
Tumblr media
This is where Alan Turing comes in! He was one of the mathematicians employed to work at Bletchley Park. Like Rejewski, Turing built his own machine to help break the cipher, consisting of essentially 36 Enigma machines. This was named the Bombe (after Rejewski’s bomba). However, Turing’s machine differed from the Polish one in that it would look for the rotor settings that could match the ciphertext with a plaintext crib, rather than the correct rotor setting based on a repeated key. More precisely, Turing’s machine would discard the arrangements that were impossible. If the plaintext letter was assumed to be A, for example, and the corresponding ciphertext letter was W, then the machine would disregard any arrangement that did not let A connect to W. It did this by passing an electric current through each rotor position, and if a position didn’t lead to a logical contradiction the machine would stop, letting the operator write down the settings. The circuit also worked around the plugboard layer of encryption. The drums on the machine were colour-coded to represent which Enigma rotor they followed (in order from I → VIII: red, maroon, green, yellow, brown, blue, black, silver). This sort of attack (using cribs) is called a known plaintext attack.
The machine itself was about 2 metres wide, 2 metres tall, and weighed around a ton. It contained around 100 rotating drums and 16 km of wire. The first bombe was called Victory, but it was too slow. The second one, named Agnus Dei, or Agnes, was a new design and was installed in August 1940.
Turing’s Bombe Machine (photo: Sarah Hartwell)
Tumblr media
At the start of the war, only about 200 workers were situated at Bletchley. However, towards the end of 1944 the number of workers had increased to almost 9000, including chess or crossword experts, scientists, and mathematicians. Quickly the manor house became too small to house all the work being done, so additional buildings called huts were built outside. One of the most famous huts is Hut 8, as this is where Turing worked.
The machines at Bletchley Park were operated by members of the Women’s Royal Naval Service, or Wrens. Women made up ¾ of the workforce there. When the machine stopped after it found a match (found a stop), a Wren would write down the rotor settings and pass them to another Wren to test it on a checking machine, and then she would restart the bombe to look for any other solutions. Once the rotor settings had been checked, they would pass it to a cryptanalyst. When the correct setting had been found, the Wrens would be alerted and told to move onto the next key. Work with the Bombes was tough - the women had to be quick and accurate over long shifts, and the rooms they worked in were dark and stuffy.
With all this work going on to decipher the Enigma, how come the Germans didn’t think that it was being broken? Well, the Allies had to keep it as secret as possible. The Germans were already convinced the cipher was unbreakable due to the sheer number of different arrangements the Enigma provided, so all the Allies had to do was keep the illusion going. For example, before launching an attack on a specific area mentioned in an Enigma message, the British would often fly an aircraft over it, so that when they did attack, the Germans would believe they had been spotted, and not that the Enigma gave them away. They would also plant decoy intelligence, and ensured that their victories did not arouse suspicion by not being too greedy.
The cryptography work done by the British also contributed to the Allied victories in the Pacific, although the Japanese mainly used their own cipher machines rather than the Enigma. Due to how sensitive the information was, it took a lot of time for a trusting relationship to develop between America and Britain, but the US military also contributed a lot to decoding Enigma messages, and ended up building their own versions of the Bombes. In February 1942, the German U-boats added another rotor to their Enigma as the Admiral was convinced their encryption system had been cracked. Luckily, one submarine accidentally sent a message with the 4th rotor positioned wrong, and then sent the same message again with the rotor placed correctly. This mistake allowed the Allies to figure out the wiring of the 4th rotor and continue deciphering German communications!
Finally, here are some photos I took when I went to Bletchley Park :)
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
I heavily regret not taking more photos of the actual writing and explanation bits, so maybe I will get to visit again in the future!
I used many sources to write this post but in particular I loved reading these:
You should go read them as well if you are interested!!
1 note · View note
stemandleafdiagram · 3 months ago
Text
NASA released the clearest pictures yet of our neighbours in the solar system
Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media Tumblr media
Oh and of course us
Tumblr media
Honourable mention
Tumblr media
85K notes · View notes
stemandleafdiagram · 4 months ago
Text
GIRL HELP I AM UNDERGOING MITOSIS
68K notes · View notes