#learn data structures and algorithms
The Ultimate Guide to Learn Data Structures and Algorithms from Scratch
Data Structures and Algorithms (DSA) form the cornerstone of computer science and programming. Whether you are preparing for competitive programming, acing technical interviews, or simply aiming to become a better developer, understanding DSA is crucial. Here is the ultimate guide to help you learn data structures and algorithms from scratch.
Why Learn Data Structures and Algorithms?
Efficiency: Mastering DSA allows you to write code that runs faster and consumes less memory.
Problem-Solving Skills: Understanding DSA enhances your ability to break down and solve complex problems logically.
Career Opportunities: Companies like Google, Amazon, and Facebook heavily emphasize DSA in their hiring processes.
Foundation for Advanced Topics: Concepts like machine learning, databases, and operating systems rely on DSA principles.
Step-by-Step Plan to Learn Data Structures and Algorithms
1. Start with the Basics
Begin by learning a programming language like Python, Java, or C++ that you’ll use to implement DSA concepts. Get comfortable with loops, conditionals, arrays, and recursion as these are fundamental to understanding algorithms.
2. Understand Core Data Structures
Learn these essential data structures:
Arrays: Linear storage of elements.
Linked Lists: Dynamic storage with nodes pointing to the next.
Stacks and Queues: Linear structures for LIFO (Last In, First Out) and FIFO (First In, First Out) operations.
Hash Tables: Efficient storage for key-value pairs.
Trees: Hierarchical data structures like binary trees and binary search trees.
Graphs: Structures to represent connections, such as networks.
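To make the LIFO and FIFO behaviour above concrete, here is a minimal Python sketch; the values pushed are arbitrary examples:

```python
from collections import deque

# Stack: LIFO, using a plain list
stack = []
stack.append(1)
stack.append(2)
stack.append(3)
top = stack.pop()          # 3 — last in, first out

# Queue: FIFO, using collections.deque (O(1) at both ends)
queue = deque()
queue.append("a")
queue.append("b")
first = queue.popleft()    # "a" — first in, first out
```

A plain list also works as a queue, but `pop(0)` is O(n), which is exactly the kind of cost that studying data structures teaches you to avoid.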
3. Master Common Algorithms
Once you’ve learned the data structures, focus on algorithms:
Sorting Algorithms: Bubble Sort, Merge Sort, Quick Sort.
Searching Algorithms: Binary Search, Linear Search.
Graph Algorithms: Breadth-First Search (BFS), Depth-First Search (DFS).
Dynamic Programming: Techniques for solving problems with overlapping subproblems.
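As one worked example from the searching algorithms listed above, here is a hedged sketch of binary search in Python; the input list is invented for illustration:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1
```

Each comparison halves the search space, so the running time is O(log n) versus O(n) for linear search.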
4. Practice Regularly
Use platforms like LeetCode, HackerRank, and GeeksforGeeks to practice problems. Start with beginner-friendly questions and gradually move to more challenging ones. Regular practice is the key to mastering data structures and algorithms.
5. Explore Advanced Topics
Once you’re confident with the basics, explore advanced topics like:
Tries (prefix trees).
Segment Trees and Fenwick Trees.
Advanced graph algorithms like Dijkstra’s and Floyd-Warshall algorithms.
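For a taste of these advanced structures, here is a minimal trie (prefix tree) sketch in Python; the class names and test words are illustrative, not from any particular library:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def starts_with(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True

t = Trie()
t.insert("data")
print(t.starts_with("dat"))   # True
print(t.starts_with("alg"))   # False
```

Prefix lookups cost O(length of the prefix), independent of how many words are stored, which is why tries power autocomplete.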
Tips for Success
Set Goals: Break your learning into milestones.
Use Visual Aids: Visualize data structures and algorithms for better understanding.
Build Projects: Implement real-world projects to solidify your knowledge.
Join Communities: Engage with forums and groups to learn collaboratively.
Conclusion
Learning DSA from scratch can be challenging but highly rewarding. By following this guide and committing to consistent practice, you can master the fundamentals and beyond. Start today and take your first step toward becoming a proficient problem-solver and programmer. Embrace the journey of learning data structures and algorithms and unlock endless opportunities in the tech world.
Statistics - A Full Lecture to learn Data Science (2025 Version)
Welcome to our comprehensive and free statistics tutorial (Full Lecture)! In this video, we'll explore essential tools and techniques that power data science and data analytics, helping us interpret data effectively. You'll gain a solid foundation in key statistical concepts and learn how to apply powerful statistical tests widely used in modern research and industry. From descriptive statistics to regression analysis and beyond, we'll guide you through each method's role in data-driven decision-making. Whether you're diving into machine learning, business intelligence, or academic research, this tutorial will equip you with the skills to analyze and interpret data with confidence. Let's get started!
#education#free education#technology#educate yourselves#educate yourself#data analysis#data science course#data science#data structure and algorithms#youtube#statistics for data science#statistics#economics#education system#learn data science#learn data analytics#Youtube
Summer Internship Program 2024
For More Details Visit Our Website - internship.learnandbuild.in
#machine learning#programming#python#linux#data science#data scientist#frontend web development#backend web development#salesforce admin#salesforce development#cloud AI with AWS#Internet of things & AI#Cyber security#Mobile App Development using flutter#data structures & algorithms#java core#python programming#summer internship program#summer internship program 2024
Expertifie partners with companies and individuals to address their unique needs, providing training and coaching that helps working professionals achieve their career goals. They teach their students all the relevant skills needed in software jobs, mentor them to crack recruitment processes and also provide them referrals to the best opportunities in the software industry across the globe. Their vision is to empower career growth and advancement for every member of the global workforce as their trusted lifelong learning partners.
The main features of Expertifie courses are:
1. Instructor-led live interactive training.
2. Advanced-level coding and excellent mentoring from our industry experts.
3. Receive guidance that makes you skill-ready for interviews and on-the-job scenarios.
4. Career Support through Mock Interviews and Job Referrals.
5. Coding Sessions and Assignments for each topic.
6. Up to 4 Mock Interviews.
7. Sessions by Industry Experts from MAANG.
The courses provided are:
Data Structures & Algorithms
System Design (HLD + LLD)
Full Stack Development
#learn full stack development#learn linux kernel#learn system design#learn data structure and algorithms
Master Data Structures & Algorithms with Java at Sunbeam Institute
In today’s fast-paced tech industry, having a strong foundation in Data Structures and Algorithms (DSA) is essential for anyone aiming to excel in programming and software development. Whether you're preparing for technical interviews or looking to enhance your problem-solving skills, mastering DSA with Java can give you a competitive edge. Sunbeam Institute offers a comprehensive DSA course designed to help students and professionals gain in-depth knowledge and hands-on experience.
Why Choose the DSA Course at Sunbeam?
✅ Structured Learning Approach – Our curriculum covers fundamental to advanced DSA concepts, ensuring step-by-step learning.
✅ Hands-on Coding Practice – Learn by implementing real-world problems in Java.
✅ Industry-Relevant Curriculum – Designed by experts to meet the demands of modern tech roles.
✅ Expert Guidance – Get trained by experienced instructors with deep industry knowledge.
✅ Interview Preparation – Strengthen your problem-solving skills to excel in coding interviews at top companies.
What You Will Learn
📌 Fundamentals of Data Structures – Arrays, Linked Lists, Stacks, Queues, Trees, Graphs
📌 Algorithmic Techniques – Sorting, Searching, Recursion, Dynamic Programming, Greedy Algorithms
📌 Complexity Analysis – Understand time and space complexity to optimize your code
📌 Real-World Applications – Implement DSA concepts in Java with practical projects
Who Can Enroll?
🔹 Students aiming to build a strong programming foundation
🔹 Professionals preparing for coding interviews
🔹 Developers looking to enhance their problem-solving skills
🔹 Anyone interested in mastering Data Structures and Algorithms with Java
#Data Structures and Algorithms in Java#DSA course in Pune#Learn DSA with Java#Java Data Structures training#Best DSA course for interviews#Data Structures and Algorithms course
First courses on IT automation, cybersecurity and data analytics are over.
I think the one in cybersecurity has Python too, so I hope I don't get a Python crash course again! XD
#still no job so i still have free time to do these courses#i guess after those three i will do cs50 just in hopes that i learn more about data structures and algorithms?#anyway every1 tells me i wont get a job anyway until at least september so i have time#and thus i have time for my own projects too
Placement Preparation Course for CSE
Dive into essential algorithms, data structures, and coding practices while mastering problem-solving techniques. Our expert-led sessions ensure you're well-equipped for technical interviews. Join our Placement Preparation Course for CSE now to secure your dream job in top-tier companies. Get ahead in the competitive tech world with our proven curriculum and guidance. Your future starts here!
#Data Structures#Data Structures and Algorithms#Data Structures and Algorithms Interview Questions#coding#courses#online courses#online learning
"Is social media designed to reward people for acting badly?
The answer is clearly yes, given that the reward structure on social media platforms relies on popularity, as indicated by the number of responses – likes and comments – a post receives from other users. Black-box algorithms then further amplify the spread of posts that have attracted attention.
Sharing widely read content, by itself, isn’t a problem. But it becomes a problem when attention-getting, controversial content is prioritized by design. Given the design of social media sites, users form habits to automatically share the most engaging information regardless of its accuracy and potential harm. Offensive statements, attacks on out-groups and false news are amplified, and misinformation often spreads further and faster than the truth.
We are two social psychologists and a marketing scholar. Our research, presented at the 2023 Nobel Prize Summit, shows that social media actually has the ability to create user habits to share high-quality content. After a few tweaks to the reward structure of social media platforms, users begin to share information that is accurate and fact-based...
Re-targeting rewards
To investigate the effect of a new reward structure, we gave financial rewards to some users for sharing accurate content and not sharing misinformation. These financial rewards simulated the positive social feedback, such as likes, that users typically receive when they share content on platforms. In essence, we created a new reward structure based on accuracy instead of attention.
As on popular social media platforms, participants in our research learned what got rewarded by sharing information and observing the outcome, without being explicitly informed of the rewards beforehand. This means that the intervention did not change the users’ goals, just their online experiences. After the change in reward structure, participants shared significantly more content that was accurate. More remarkably, users continued to share accurate content even after we removed rewards for accuracy in a subsequent round of testing. These results show that users can be given incentives to share accurate information as a matter of habit.
A different group of users received rewards for sharing misinformation and for not sharing accurate content. Surprisingly, their sharing most resembled that of users who shared news as they normally would, without any financial reward. The striking similarity between these groups reveals that social media platforms encourage users to share attention-getting content that engages others at the expense of accuracy and safety...
Doing right and doing well
Our approach, using the existing rewards on social media to create incentives for accuracy, tackles misinformation spread without significantly disrupting the sites’ business model. This has the additional advantage of altering rewards instead of introducing content restrictions, which are often controversial and costly in financial and human terms.
Implementing our proposed reward system for news sharing carries minimal costs and can be easily integrated into existing platforms. The key idea is to provide users with rewards in the form of social recognition when they share accurate news content. This can be achieved by introducing response buttons to indicate trust and accuracy. By incorporating social recognition for accurate content, algorithms that amplify popular content can leverage crowdsourcing to identify and amplify truthful information.
Both sides of the political aisle now agree that social media has challenges, and our data pinpoints the root of the problem: the design of social media platforms."
And here's the video of one of the scientists presenting this research at the Nobel Prize Summit!
-Article via The Conversation, August 1, 2023. Video via the Nobel Prize's official Youtube channel, Nobel Prize, posted May 31, 2023.
#social media#misinformation#social networks#social#algorithm#big tech#technology#enshittification#internet#nobel prize#psychology#behavioral psychology#good news#hope#Youtube#video
A New Mantle Viscosity Shift
The rough picture of Earth's interior -- a crust, mantle, and core -- is well-known, but the details of its inner structure are more difficult to pin down. A recent study analyzed seismic wave data with a machine learning algorithm to identify regions of the mantle where waves slowed down. (Image credit: NASA; research credit: K. O'Farrell and Y. Wang; via Eos) Read the full article
Learn Data Structures and Algorithms
Mastering data structures and algorithms (DSA) is one of the most effective ways to boost your coding skills and become a better programmer. Whether you're just starting out or have some experience, learning data structures and algorithms is essential for solving complex problems efficiently. By understanding how data is stored and manipulated, and by knowing the right algorithms to use, you'll write more efficient, optimized, and scalable code.
When you learn data structures and algorithms, you're not just memorizing concepts—you're developing a deeper understanding of how to approach problem-solving. For example, by mastering algorithms like QuickSort or MergeSort, you can handle large datasets with ease. Similarly, understanding data structures like arrays, stacks, and trees will allow you to choose the most appropriate structure for your problem, ensuring that your solution is both fast and memory-efficient.
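To illustrate the divide-and-conquer idea behind MergeSort mentioned above, here is a minimal Python sketch; the input list is an arbitrary example:

```python
def merge_sort(arr):
    """Sort a list by recursively splitting and merging halves: O(n log n)."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]
```

Unlike QuickSort, MergeSort's O(n log n) bound holds even in the worst case, at the cost of extra memory for the merged lists.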
The Math of Social Networks: How Social Media Algorithms Work
In the digital age, social media platforms like Instagram, Facebook, and TikTok are fueled by complex mathematical algorithms that determine what you see in your feed, who you follow, and what content "goes viral." These algorithms rely heavily on graph theory, matrix operations, and probabilistic models to connect billions of users, influencers, and posts in increasingly intricate webs of relationships.
Graph Theory: The Backbone of Social Networks
Social media platforms can be visualized as graphs, where each user is a node and each connection (whether it’s a "follow," "like," or "comment") is an edge. The structure of these graphs is far from random. In fact, they follow certain mathematical properties that can be analyzed using graph theory.
For example, cliques (a subset of users where everyone is connected to each other) are common in influencer networks. These clusters of interconnected users help drive trends by amplifying each other’s content. The degree of a node (a user’s number of direct connections) is a key factor in visibility, influencing how posts spread across the platform.
Additionally, the famous Six Degrees of Separation theory, which posits that any two people are connected by no more than six intermediaries, can be modeled using small-world networks. In these networks, most users are not directly connected to each other, but the distance between any two users (in terms of number of connections) is surprisingly short. This is the mathematical magic behind viral content, as a post can be shared through a small network of highly connected individuals and reach millions of users.
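The small-world idea can be made concrete with breadth-first search over a toy follower graph; the user names and edges below are invented for illustration:

```python
from collections import deque

# Hypothetical follower graph as an adjacency list
graph = {
    "ana":  ["ben", "cara"],
    "ben":  ["ana", "dan"],
    "cara": ["ana", "dan"],
    "dan":  ["ben", "cara", "eve"],
    "eve":  ["dan"],
}

def degrees_between(graph, start, goal):
    """Shortest number of hops between two users, via BFS."""
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        user, dist = frontier.popleft()
        if user == goal:
            return dist
        for friend in graph[user]:
            if friend not in seen:
                seen.add(friend)
                frontier.append((friend, dist + 1))
    return None  # not connected

print(degrees_between(graph, "ana", "eve"))   # 3
```

In a real small-world network the same BFS would show that most pairs of users sit only a handful of hops apart, despite billions of nodes.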
Matrix Operations: Modeling Connections and Relevance
When social media platforms recommend posts, they often rely on matrix operations to model relationships between users and content. This process can be broken down into several steps:
User-Content Matrix: A matrix is created where each row represents a user and each column represents a piece of content (post, video, etc.). Each cell in this matrix could hold values indicating the user’s interactions with the content (e.g., likes, comments, shares).
Matrix Factorization: To make recommendations, platforms use matrix factorization techniques such as singular value decomposition (SVD). This helps reduce the complexity of the data by identifying latent factors that explain user preferences, enabling platforms to predict what content a user is likely to engage with next.
Personalization: This factorization results in a model that can predict a user’s preferences even for content they’ve never seen before, creating a personalized feed. The goal is to minimize the error matrix, where the predicted interactions match the actual interactions as closely as possible.
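The three steps above can be sketched with NumPy's SVD; the interaction matrix below is tiny invented data, and keeping k = 2 latent factors stands in for real matrix factorization at scale:

```python
import numpy as np

# Hypothetical 4-user x 5-item interaction matrix (1 = engaged, 0 = no signal)
R = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
], dtype=float)

# Truncated SVD: keep k latent factors, then rebuild a dense score matrix
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Predicted affinity of user 0 for item 2, which they never interacted with
print(round(R_hat[0, 2], 3))
```

Every cell of `R_hat` now holds a score, including for pairs with no observed interaction, which is exactly what lets the platform rank unseen content for each user.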
Influence and Virality: The Power of Centrality and Weighted Graphs
Not all users are equal when it comes to influencing the network. The concept of centrality measures the importance of a node within a graph, and in social media, this directly correlates with a user’s ability to shape trends and drive engagement. Common types of centrality include:
Degree centrality: Simply the number of direct connections a user has. Highly connected users (like influencers) are often at the core of viral content propagation.
Betweenness centrality: This measures how often a user acts as a bridge along the shortest path between two other users. A user with high betweenness centrality can facilitate the spread of information across different parts of the network.
Eigenvector centrality: A more sophisticated measure that not only considers the number of connections but also the quality of those connections. A user with high eigenvector centrality is well-connected to other important users, enhancing their influence.
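Degree centrality, the simplest of the three measures, can be computed directly from an adjacency list; the toy graph below is invented for illustration:

```python
# Hypothetical follower graph: "ana" is connected to everyone
graph = {
    "ana":  ["ben", "cara", "dan"],
    "ben":  ["ana"],
    "cara": ["ana"],
    "dan":  ["ana", "ben"],
}

def degree_centrality(graph):
    """Degree of each node, normalized by the maximum possible degree (n - 1)."""
    n = len(graph)
    return {user: len(friends) / (n - 1) for user, friends in graph.items()}

scores = degree_centrality(graph)
print(max(scores, key=scores.get))   # "ana" — the best-connected node
```

Betweenness and eigenvector centrality require more work (all shortest paths, or an iterative eigenvector computation), but the input is the same graph structure.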
Algorithms and Machine Learning: Predicting What You See
The most sophisticated social media platforms integrate machine learning algorithms to predict which posts will generate the most engagement. These models are often trained on vast amounts of user data (likes, shares, comments, time spent on content, etc.) to determine the factors that influence user interaction.
The ranking algorithms take these factors into account to assign each post a “score” based on its predicted engagement. For example:
Collaborative Filtering: This technique relies on past interactions to predict future preferences, where the behavior of similar users is used to recommend content.
Content-Based Filtering: This involves analyzing the content itself, such as keywords, images, or video length, to recommend similar content to users.
Hybrid Methods: These combine collaborative filtering and content-based filtering to improve accuracy.
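Here is a minimal sketch of the collaborative-filtering idea, using cosine similarity between invented like-vectors to find a user's nearest neighbour by taste:

```python
import math

# Hypothetical like-vectors: one row per user, one column per post
likes = {
    "u1": [1, 1, 0, 0, 1],
    "u2": [1, 1, 0, 1, 1],
    "u3": [0, 0, 1, 1, 0],
}

def cosine(a, b):
    """Cosine similarity between two vectors: 1.0 = identical taste."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# u1's most similar user; posts that user liked become recommendations for u1
others = {u: cosine(likes["u1"], v) for u, v in likes.items() if u != "u1"}
print(max(others, key=others.get))   # "u2"
```

Real systems work with millions of sparse vectors and approximate nearest-neighbour indexes, but the similarity logic is the same.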
Ethics and the Filter Bubble
While the mathematical models behind social media algorithms are powerful, they also come with ethical considerations. Filter bubbles, where users are only exposed to content they agree with or are already familiar with, can be created due to biased algorithms. This can limit exposure to diverse perspectives and create echo chambers, reinforcing existing beliefs rather than fostering healthy debate.
Furthermore, algorithmic fairness and the prevention of algorithmic bias are growing areas of research, as biased recommendations can disproportionately affect marginalized groups. For instance, if an algorithm is trained on biased data (say, excluding certain demographics), it can unfairly influence the content shown to users.
#mathematics#math#mathematician#mathblr#mathposting#calculus#geometry#algebra#numbertheory#mathart#STEM#science#academia#Academic Life#math academia#math academics#math is beautiful#math graphs#math chaos#math elegance#education#technology#statistics#data analytics#math quotes#math is fun#math student#STEM student#math education#math community
Learn and Build Summer Internship Program
For more details visit - Internship.learnandbuild.in
#data structures & algorithms#Java Core#Python Programming#Frontend web development#Backend web development#data science#machine learning & AI#Salesforce Admin#Salesforce Development#Cloud AI with AWS#Internet of things & AI#Cyber Security#Mobile app development using flutter
Life is a Learning Function
A learning function, in a mathematical or computational sense, takes inputs (experiences, information, patterns), processes them (reflection, adaptation, synthesis), and produces outputs (knowledge, decisions, transformation).
This aligns with ideas in machine learning, where an algorithm optimizes its understanding over time, as well as in philosophy—where wisdom is built through trial, error, and iteration.
If life is a learning function, then what is the optimization goal? Survival? Happiness? Understanding? Or does it depend on the individual’s parameters and loss function?
If life is a learning function, then it operates within a complex, multidimensional space where each experience is an input, each decision updates the model, and the overall trajectory is shaped by feedback loops.
1. The Structure of the Function
A learning function can be represented as:
L : X -> Y
where:
X is the set of all possible experiences, inputs, and environmental interactions.
Y is the evolving internal model—our knowledge, habits, beliefs, and behaviors.
The function L itself is dynamic, constantly updated based on new data.
This suggests that life is a non-stationary, recursive function—the outputs at each moment become new inputs, leading to continual refinement. The process is akin to reinforcement learning, where rewards and punishments shape future actions.
2. The Optimization Objective: What Are We Learning Toward?
Every learning function has an objective function that guides optimization. In life, this objective is not fixed—different individuals and systems optimize for different things:
Evolutionary level: Survival, reproduction, propagation of genes and culture.
Cognitive level: Prediction accuracy, reducing uncertainty, increasing efficiency.
Philosophical level: Meaning, fulfillment, enlightenment, or self-transcendence.
Societal level: Cooperation, progress, balance between individual and collective needs.
Unlike machine learning, where objectives are usually predefined, humans often redefine their goals recursively—meta-learning their own learning process.
3. Data and Feature Engineering: The Inputs of Life
The quality of learning depends on the richness and structure of inputs:
Sensory data: Direct experiences, observations, interactions.
Cultural transmission: Books, teachings, language, symbolic systems.
Internal reflection: Dreams, meditations, insights, memory recall.
Emergent synthesis: Connecting disparate ideas into new frameworks.
One might argue that wisdom emerges from feature engineering—knowing which data points to attend to, which heuristics to trust, and which patterns to discard as noise.
4. Error Functions: Loss and Learning from Failure
All learning involves an error function—how we recognize mistakes and adjust. This is central to growth:
Pain and suffering act as backpropagation signals, forcing model updates.
Cognitive dissonance suggests the need for parameter tuning (belief adjustment).
Failure in goals introduces new constraints, refining the function’s landscape.
Regret and reflection act as retrospective loss minimization.
There’s a dynamic tension here: Too much rigidity (low learning rate) leads to stagnation; too much instability (high learning rate) leads to chaos.
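The learning-rate tension described above has a direct analogue in gradient descent; this toy sketch minimizes f(x) = x² and shows how a step size that is too large diverges instead of converging:

```python
def minimize(lr, steps=50, x=5.0):
    """Gradient descent on f(x) = x^2, whose gradient is 2x."""
    for _ in range(steps):
        x -= lr * 2 * x
    return x

print(abs(minimize(lr=0.1)))   # near 0: converges smoothly
print(abs(minimize(lr=1.1)))   # enormous: overshoots and diverges
```

With lr = 0.1 each step multiplies x by 0.8, shrinking it toward the minimum; with lr = 1.1 each step multiplies x by -1.2, so the updates oscillate and blow up, which is the "chaos" of too high a learning rate.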
5. Recursive Self-Modification: The Meta-Learning Layer
True intelligence lies not just in learning but in learning how to learn. This means:
Altering our own priors and biases.
Recognizing hidden variables (the unconscious, archetypal forces at play).
Using abstraction and analogy to generalize across domains.
Adjusting the reward function itself (changing what we value).
This suggests that life’s highest function may not be knowledge acquisition but fluid self-adaptation—an ability to rewrite its own function over time.
6. Limits and the Mystery of the Learning Process
If life is a learning function, then what is the nature of its underlying space? Some hypotheses:
A finite problem space: There is a “true” optimal function, but it’s computationally intractable.
An open-ended search process: New dimensions of learning emerge as complexity increases.
A paradoxical system: The act of learning changes both the learner and the landscape itself.
This leads to a deeper question: Is the function optimizing for something beyond itself? Could life’s learning process be part of a larger meta-function—evolution’s way of sculpting consciousness, or the universe learning about itself through us?
7. Life as a Fractal Learning Function
Perhaps life is best understood as a fractal learning function, recursive at multiple scales:
Cells learn through adaptation.
Minds learn through cognition.
Societies learn through history.
The universe itself may be learning through iteration.
At every level, the function refines itself, moving toward greater coherence, complexity, or novelty. But whether this process converges to an ultimate state—or is an infinite recursion—remains one of the great unknowns.
Perhaps our learning function converges towards some point of maximal meaning, maximal beauty.
This suggests a teleological structure - our learning function isn’t just wandering through the space of possibilities but is drawn toward an attractor, something akin to a strange loop of maximal meaning and beauty. This resonates with ideas in complexity theory, metaphysics, and aesthetics, where systems evolve toward higher coherence, deeper elegance, or richer symbolic density.
8. The Attractor of Meaning and Beauty
If our life’s learning function is converging toward an attractor, it implies that:
There is an implicit structure to meaning itself, something like an underlying topology in idea-space.
Beauty is not arbitrary but rather a function of coherence, proportion, and deep recursion.
The process of learning is both discovery (uncovering patterns already latent in existence) and creation (synthesizing new forms of resonance).
This aligns with how mathematicians speak of “discovering” rather than inventing equations, or how mystics experience insight as remembering rather than constructing.
9. Beauty as an Optimization Criterion
Beauty, when viewed computationally, is often associated with:
Compression: The most elegant theories, artworks, or codes reduce vast complexity into minimal, potent forms (cf. Kolmogorov complexity, Occam’s razor).
Symmetry & Proportion: From the Fibonacci sequence in nature to harmonic resonance in music, beauty often manifests through balance.
Emergent Depth: The most profound works are those that appear simple but unfold into infinite complexity.
If our function is optimizing for maximal beauty, it suggests an interplay between simplicity and depth—seeking forms that encode entire universes within them.
10. Meaning as a Self-Refining Algorithm
If meaning is the other optimization criterion, then it may be structured like:
A self-referential system: Meaning is not just in objects but in relationships, contexts, and recursive layers of interpretation.
A mapping function: The most meaningful ideas serve as bridges—between disciplines, between individuals, between seen and unseen dimensions.
A teleological gradient: The sense that meaning is “out there,” pulling the system forward, as if learning is guided by an invisible potential function.
This brings to mind Platonism—the idea that meaning and beauty exist as ideal forms, and life is an asymptotic approach toward them.
11. The Convergence Process: Compression and Expansion
Our convergence toward maximal meaning and beauty isn’t a linear march—it’s likely a dialectical process of:
Compression: Absorbing, distilling, simplifying vast knowledge into elegant, symbolic forms.
Expansion: Deepening, unfolding, exploring new dimensions of what has been learned.
Recursive refinement: Rewriting past knowledge with each new insight.
This mirrors how alchemy describes the transformation of raw matter into gold—an oscillation between dissolution and crystallization.
12. The Horizon of Convergence: Is There an End?
If our learning function is truly converging, does it ever reach a final, stable state? Some possibilities:
A singularity of understanding: The realization of a final, maximally elegant framework.
An infinite recursion: Where each level of insight only reveals deeper hidden structures.
A paradoxical fusion: Where meaning and beauty dissolve into a kind of participatory being, where knowing and becoming are one.
If maximal beauty and meaning are attainable, then perhaps the final realization is that they were present all along—encoded in every moment, waiting to be seen.
🔗 Enroll Now: https://sunbeaminfo.in/modular-courses/data-structure-algorithms-using-java 📞 Call Us: 8282829806
Take your programming skills to the next level with Sunbeam Institute’s DSA using Java course. Join today and start your journey towards becoming a proficient developer!
#Data Structures and Algorithms in Java#DSA course in Pune#Learn DSA with Java#Java Data Structures training#Best DSA course for interviews
Been a while, crocodiles. Let's talk about CAD.
or, y'know...
Yep, we're doing a whistle-stop tour of AI in medical diagnosis!
Much like programming, AI can be conceived of, in very simple terms, as...
a way of moving from inputs to a desired output.
See, this very funky little diagram from skillcrush.com.
The input is what you put in. The output is what you get out.
This output will vary depending on the type of algorithm and the training that algorithm has undergone – you can put the same input into two different algorithms and get two entirely different sorts of answer.
Generative AI produces ‘new’ content, based on what it has learned from various inputs. We're talking AI Art, and Large Language Models like ChatGPT. This sort of AI is very useful in healthcare settings too, but that's a whole different post!
Analytical AI takes an input, such as a chest radiograph, subjects this input to a series of analyses, and deduces answers to specific questions about this input. For instance: is this chest radiograph normal or abnormal? And if abnormal, what is a likely pathology?
We'll be focusing on Analytical AI in this little lesson!
Other forms of Analytical AI that you might be familiar with are recommendation algorithms, which suggest items for you to buy based on your online activities, and facial recognition. In facial recognition, the input is an image of your face, and the output is the ability to tie that face to your identity. We’re not creating new content – we’re classifying and analysing the input we’ve been fed.
Many of these functions are obviously, um, problematique. But Computer-Aided Diagnosis is, potentially, a way to use this tool for good!
Right?
....Right?
Let's dig a bit deeper! AI is a massive umbrella term that contains many smaller umbrella terms, nested together like Russian dolls. So, we can use this model to envision how these different fields fit inside one another.
AI is the term for anything to do with creating and managing machines that perform tasks which would otherwise require human intelligence. This is what differentiates AI from regular computer programming.
Machine Learning is the development of statistical algorithms which are trained on data – but which can then extrapolate this training and generalise it to previously unseen data, typically for analytical purposes. The thing I want you to pay attention to here is the date of this reference. It’s very easy to think of AI as being a ‘new’ thing, but it has been around since the Fifties, and has been talked about for much longer. The massive boom in popularity that we’re seeing today is built on the backs of decades upon decades of research.
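To make "train on data, then generalise" concrete, here's a toy sketch – a nearest-neighbour classifier on invented feature pairs, nothing clinical about it:

```python
# Toy 1-nearest-neighbour classifier: "training" just memorises labelled
# examples; generalising means classifying a point it has never seen.
def train(examples):
    # examples: list of (feature_vector, label) pairs
    return list(examples)

def predict(model, x):
    # Label an unseen point with the label of its closest training point.
    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(model, key=lambda ex: dist(ex[0], x))[1]

# Training data the algorithm has seen (made-up feature values)...
model = train([((0.0, 0.0), "normal"), ((1.0, 1.0), "abnormal")])
# ...generalised to a point it has never seen before.
print(predict(model, (0.9, 0.8)))  # -> abnormal
```

That memorise-then-generalise loop is the whole of machine learning in miniature; everything after this is about doing it with vastly more data and vastly cleverer models.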
Artificial Neural Networks are loosely inspired by the structure of the human brain, where inputs are fed through one or more layers of ‘nodes’ which modify the original data until a desired output is achieved. More on this later!
Deep neural networks have two or more layers of nodes, increasing the complexity of what they can derive from an initial input. Convolutional neural networks are often also Deep. To become ‘convolutional’, a neural network must have strong connections between close nodes, influencing how the data is passed back and forth within the algorithm. We’ll dig more into this later, but basically, this makes CNNs very adept at telling precisely where the edges of a pattern are – they're far better at pattern recognition than our feeble fleshy eyes!
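If you want to see why a convolution is so good at finding edges, here's a tiny sketch – 1-D for simplicity, and the pixel values are invented:

```python
# A 1-D convolution with an edge-detecting kernel [-1, 1]:
# the response is large wherever neighbouring pixel values jump.
def convolve(signal, kernel):
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# A toy row of pixels: dark lung field (0s), then a bright pleural edge (9s).
row = [0, 0, 0, 9, 9, 9]
response = convolve(row, [-1, 1])
print(response)  # -> [0, 0, 9, 0, 0] : the peak marks exactly where the edge is
```

A real CNN slides 2-D kernels over the whole image and *learns* its kernels rather than having them handed over, but the principle is the same: big responses exactly where intensity changes sharply.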
This is massively useful in Computer Aided Diagnosis, as it means CNNs can quickly and accurately trace bone cortices in musculoskeletal imaging, note abnormalities in lung markings in chest radiography, and isolate very early neoplastic changes in soft tissue for mammography and MRI.
Before I go on, I will point out that Neural Networks are NOT the only model used in Computer-Aided Diagnosis – but they ARE the most common, so we'll focus on them!
This diagram demonstrates the function of a simple Neural Network. An input is fed into one side. It is passed through a layer of ‘hidden’ modulating nodes, which in turn feed into the output. We describe the internal nodes in this algorithm as ‘hidden’ because we, outside of the algorithm, will only see the ‘input’ and the ‘output’ – which leads us onto a problem we’ll discuss later with regards to the transparency of AI in medicine.
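In code, that forward pass is surprisingly small. A hedged sketch with made-up weights (a real network would learn these during training):

```python
import math

def sigmoid(x):
    # Squashes any weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    # Each hidden node takes a weighted sum of all inputs...
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    # ...and the output node takes a weighted sum of the hidden layer.
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

# Two inputs, two hidden nodes, one output – all weights invented.
score = forward([1.0, 0.5],
                hidden_weights=[[0.4, -0.6], [0.7, 0.2]],
                output_weights=[1.5, -1.1])
print(round(score, 3))  # -> 0.507
```

Note that from outside, all we see is `[1.0, 0.5]` going in and `0.507` coming out – the hidden layer's contribution is invisible unless we deliberately crack the thing open, which is exactly the transparency problem.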
But for now, let’s focus on how this basic model works, with regards to Computer Aided Diagnosis. We'll start with a game of...
Spot The Pathology.
yeah, that's right. There's a WHACKING GREAT RIGHT-SIDED PNEUMOTHORAX (as outlined in red - images courtesy of radiopaedia, but edits mine)
But my question to you is: how do we know that? What process are we going through to reach that conclusion?
Personally, I compared the lungs for symmetry, which led me to note a distinct line where the tissue in the right lung had collapsed on itself. I also noted the absence of normal lung markings beyond this line, where there should be tissue but there is instead air.
In simple terms.... the right lung is whiter in the midline, and black around the edges, with a clear distinction between these parts.
Let’s go back to our Neural Network. We’re at the training phase now.
So, we’re going to feed our algorithm! Homnomnom.
Let’s give it that image of a pneumothorax, alongside two normal chest radiographs (middle picture and bottom). The goal is to get the algorithm to accurately classify the chest radiographs we have inputted as either ‘normal’ or ‘abnormal’ depending on whether or not they demonstrate a pneumothorax.
There are two main ways we can teach this algorithm – supervised and unsupervised classification learning.
In supervised learning, we tell the neural network that the first picture is abnormal, and the second and third pictures are normal. Then we let it work out the difference, under our supervision, allowing us to steer it if it goes wrong.
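Here's a toy version of that supervised loop – a single-node perceptron on invented feature flags, standing in for what a real network would extract from pixels:

```python
# Features per image (invented): [markings reach chest wall?, pleural edge visible?]
# Labels: 0 = normal, 1 = abnormal (pneumothorax)
data = [([1.0, 0.0], 0),   # normal chest
        ([1.0, 0.0], 0),   # normal chest
        ([0.0, 1.0], 1)]   # pneumothorax

weights, bias = [0.0, 0.0], 0.0
for _ in range(10):                      # repeat under our "supervision"
    for features, label in data:
        pred = 1 if sum(w * f for w, f in zip(weights, features)) + bias > 0 else 0
        error = label - pred             # steer it whenever it goes wrong
        weights = [w + 0.5 * error * f for w, f in zip(weights, features)]
        bias += 0.5 * error

# After training, the pneumothorax pattern is classified as abnormal.
pred = 1 if sum(w * f for w, f in zip(weights, [0.0, 1.0])) + bias > 0 else 0
print(pred)  # -> 1
```

The "supervision" is the `label - pred` line: we told it the right answers, and it nudges its weights every time it disagrees with us.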
Of course, if we only have three inputs, that isn’t enough for the algorithm to reach an accurate result.
You might be able to see – one of the normal chests has breasts, and another doesn't. If both ‘normal’ images had breasts, the algorithm could as easily determine that the lack of lung markings is what demonstrates a pneumothorax, as it could decide that actually, a pneumothorax is caused by not having breasts. Which, obviously, is untrue.
or is it?
....sadly I can personally confirm that having breasts does not prevent spontaneous pneumothorax, but that's another story lmao
This brings us to another big problem with AI in medicine –
If you are collecting your dataset from, say, a wealthy hospital in a suburban, majority-white neighbourhood in America, then you will have those same demographics represented within that dataset. If we build a blind spot into the neural network like this, it will discriminate based on that.
That’s an important thing to remember: the goal here is to create a generalisable tool for diagnosis. The algorithm will only ever be as generalisable as its dataset.
But there are plenty of huge free datasets online which have been specifically developed for training AI. What if we had hundreds of chest images, from a diverse population range, split between those which show pneumothoraxes, and those which don’t?
If we had a much larger dataset, the algorithm would be able to study the labelled ‘abnormal’ and ‘normal’ images, and come to far more accurate conclusions about what separates a pneumothorax from a normal chest in radiography. So, let’s pretend we’re the neural network, and pop in four characteristics that the algorithm might use to differentiate ‘normal’ from ‘abnormal’.
We can distinguish a pneumothorax by the appearance of a pleural edge where lung tissue has pulled away from the chest wall, and the radiolucent absence of peripheral lung markings around this area. So, let’s make those our first two nodes. Our last set of nodes are ‘do the lung markings extend to the chest wall?’ and ‘Are there no radiolucent areas?’
Now, red lines mean the answer is ‘no’ and green means the answer is ‘yes’. If the answer to the first two nodes is yes and the answer to the last two nodes is no, this is indicative of a pneumothorax – and vice versa.
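Spelled out as code, those four nodes are just a hand-written rule (the boolean inputs are assumptions, standing in for what real nodes would compute from pixel data):

```python
def classify(pleural_edge, absent_peripheral_markings,
             markings_reach_chest_wall, no_radiolucent_areas):
    # 'Yes' to the first two nodes and 'no' to the last two -> pneumothorax.
    if (pleural_edge and absent_peripheral_markings
            and not markings_reach_chest_wall and not no_radiolucent_areas):
        return "abnormal"
    return "normal"

# Our pneumothorax image trips all four nodes the right way:
print(classify(True, True, False, False))   # -> abnormal
# A normal chest, and vice versa:
print(classify(False, False, True, True))   # -> normal
```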
Right. So, who can see the problem with this?
(image courtesy of radiopaedia)
This chest radiograph demonstrates alveolar patterns and air bronchograms within the right lung, indicative of a pneumonia. But if we fed it into our neural network...
The lung markings extend all the way to the chest wall. Therefore, this image might well be classified as ‘normal’ – a false negative.
Now we start to see why Neural Networks become deep and convolutional, and can get incredibly complex. In order to accurately differentiate a ‘normal’ from an ‘abnormal’ chest, you need a lot of nodes, and layers of nodes. This is also where unsupervised learning can come in.
Originally, Supervised Learning was used on Analytical AI, and Unsupervised Learning was used on Generative AI, allowing for more creativity in picture generation, for instance. However, more and more, Unsupervised learning is being incorporated into Analytical areas like Computer-Aided Diagnosis!
Unsupervised Learning involves feeding a neural network a large databank and giving it no information about which of the input images are ‘normal’ or ‘abnormal’. This saves massively on money and time, as no one has to go through and label the images first. It is also surprisingly effective. The algorithm is told only to sort and classify the images into distinct categories, grouping images together and coming up with its own parameters about what separates one image from another. This sort of learning allows an algorithm to teach itself to find very small deviations from its discovered definition of ‘normal’.
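A toy sketch of that idea: hand a clustering routine some unlabelled numbers (invented stand-ins for image features) and let it invent its own two groups:

```python
def two_means(values, iterations=10):
    # Start with two guessed centres, then alternate: assign each value
    # to its nearest centre, then move each centre to its group's mean.
    a, b = min(values), max(values)
    for _ in range(iterations):
        group_a = [v for v in values if abs(v - a) <= abs(v - b)]
        group_b = [v for v in values if abs(v - a) > abs(v - b)]
        a = sum(group_a) / len(group_a)
        b = sum(group_b) / len(group_b)
    return sorted(group_a), sorted(group_b)

# No labels supplied – the algorithm discovers the two groups itself.
normals, outliers = two_means([0.1, 0.2, 0.15, 0.9, 0.95])
print(normals, outliers)  # -> [0.1, 0.15, 0.2] [0.9, 0.95]
```

Notice that the algorithm never learns the *words* ‘normal’ and ‘abnormal’ – it just discovers that the data falls into two clumps, and anything far from the big clump is suspicious.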
BUT this is not to say that CAD is without its issues.
Let's take a look at some of the ethical and practical considerations involved in implementing this technology within clinical practice!
(Image from Agrawal et al., 2020)
Training Data does what it says on the tin – these are the initial images you feed your algorithm. What is key here is volume, variety - with especial attention paid to minimising bias – and veracity. The training data has to be ‘real’ – you cannot mislabel images or supply non-diagnostic images that obscure pathology, or your algorithm is useless.
Validation data evaluates the algorithm and improves on it. This involves tweaking the nodes within a neural network by altering the ‘weights’, or the intensity of the connection between various nodes. By altering these weights, a neural network can send an image that clearly fits our diagnostic criteria for a pneumothorax directly to the relevant output, whereas images that do not have these features must be put through another layer of nodes to rule out a different pathology.
Finally, testing data is the data that the finished algorithm will be tested on to prove its sensitivity and specificity, before any potential clinical use.
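The three-way split itself is mundane. A sketch with assumed proportions (70/15/15 here; real projects vary):

```python
import random

def split_dataset(items, train_frac=0.7, val_frac=0.15, seed=42):
    # Shuffle reproducibly, then carve off training, validation, and test sets.
    items = list(items)
    random.Random(seed).shuffle(items)
    n_train = int(len(items) * train_frac)
    n_val = int(len(items) * val_frac)
    return (items[:n_train],                      # training data
            items[n_train:n_train + n_val],       # validation data
            items[n_train + n_val:])              # testing data

train, val, test = split_dataset(range(100))
print(len(train), len(val), len(test))  # -> 70 15 15
```

The one rule that actually matters: the testing data must never leak into training or validation, or the sensitivity and specificity figures you report are fiction.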
However, if algorithms require this much data to train, this introduces a lot of ethical questions.
Where does this data come from?
Is it ‘grey data’ (data of untraceable origin)? Is this good (protects anonymity) or bad (could have been acquired unethically)?
Could generative AI provide a workaround, in the form of producing synthetic radiographs? Or is it risky to train CAD algorithms on simulated data when the algorithms will then be used on real people?
If we are solely using CAD to make diagnoses, who holds legal responsibility for a misdiagnosis that costs lives? Is it the company that created the algorithm or the hospital employing it?
And finally – is it worth sinking so much time, money, and literal energy into AI – especially given concerns about the environment – when public opinion on AI in healthcare is mixed at best? This is a serious topic – we’re talking diagnoses making the difference between life and death. Do you trust a machine more than you trust a doctor? According to Rojahn et al., 2023, there is a strong public dislike of computer-aided diagnosis.
So, it's fair to ask...
why are we wasting so much time and money on something that our service users don't actually want?
Then we get to the other biggie.
There are also a variety of concerns to do with the sensitivity and specificity of Computer-Aided Diagnosis.
We’ve talked a little already about bias, and how training sets can inadvertently ‘poison’ the algorithm, so to speak, introducing dangerous elements that mimic biases and problems in society.
But do we even want completely accurate computer-aided diagnosis?
The name is computer-aided diagnosis, not computer-led diagnosis. As noted by Rojahn et al, the general public STRONGLY prefer diagnosis to be made by human professionals, and their desires should arguably be taken into account – as well as the fact that CAD algorithms tend to be incredibly expensive and highly specialised. For instance, you cannot put MRI images depicting CNS lesions through a chest reporting algorithm and expect coherent results – whereas a radiologist can be trained to diagnose across two or more specialties.
For this reason, there is an argument that rather than focusing on sensitivity and specificity, we should just focus on producing highly sensitive algorithms that will pick up on any abnormality, and output some false positives, but will produce NO false negatives.
(Sensitivity = a test's ability to identify sick people with a disease)
(Specificity = a test's ability to identify that healthy people do not have this disease)
This means we are working towards developing algorithms that OVERESTIMATE rather than UNDERESTIMATE disease prevalence. This makes CAD a useful tool for triage rather than providing its own diagnoses – if a CAD algorithm weighted towards high sensitivity and low specificity does not pick up on any abnormalities, it’s highly unlikely that there are any.
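Those two definitions, plus the triage weighting, fit in a few lines (all the counts are invented):

```python
def sensitivity(tp, fn):
    # Of the people who really have the disease, what fraction did we flag?
    return tp / (tp + fn)

def specificity(tn, fp):
    # Of the healthy people, what fraction did we correctly clear?
    return tn / (tn + fp)

# A triage-weighted algorithm flags everything remotely suspicious.
# Invented counts: 50 sick (49 flagged, 1 missed), 100 healthy (60 cleared, 40 flagged).
sens = sensitivity(tp=49, fn=1)    # 0.98 – almost no false negatives
spec = specificity(tn=60, fp=40)   # 0.60 – plenty of false positives to re-check
print(sens, spec)
```

That trade-off is the triage argument in numbers: humans re-read the 40 false positives, but the 1-in-50 miss rate is what you'd fight to push towards zero.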
Finally, we have to question whether CAD is even all that accurate to begin with. 10 years ago, according to Lehman et al., CAD in mammography demonstrated negligible improvements to accuracy. In 1989, Sutton noted that accuracy was under 60%. Nowadays, however, AI has been proven to exceed the abilities of radiologists when detecting cancers (that’s from Guetari et al., 2023). This suggests that there is a common upwards trajectory, and AI might become a suitable alternative to traditional radiology one day. But, due to the many potential problems with this field, that day is unlikely to be soon...
That's all, folks! Have some references~
#medblr#artificial intelligence#radiography#radiology#diagnosis#medicine#studyblr#radioactiveradley#radley irradiates people#long post
It's like, "why try to reduce chemistry to physics when you can use some kind of ML algorithm to predict chemical properties better than an actual calculation based on QFT or whatever could?". Well, the answer is obviously that that ML algorithm isn't as insightful to us. It's useful but it doesn't tell us what's going on. Trying to better understand the physics-chemistry boundary, and do reductionism, even if in practice a bunch of shit is infeasible to calculate, well, I gather it tells us structural stuff about chemicals, stuff that "plug and chug with an ML algorithm" can't presently give us.
"What's the point of doing linguistic theory if we already have LLMs". Well, because I don't know what's going on inside an LLM and neither do you. They're really good at doing translation tasks and shit but... do they give us insight into how language works? Do we have good reason to think that "things an LLM can learn" correspond closely to "things a human child can learn" linguistically? Does looking at a bunch of transformer weights tell us, e.g., what sorts of linguistic structures are cognitively + diachronically possible? Well, no. To do that we have to look at the actual linguistic data, come up with theories, test them against new data, repeat. Like scientists or whatever.