#AlphaGo
macmanx · 6 months
Text
[embedded Vimeo video]
brilliant
6 notes · View notes
nonamem9 · 2 years
Text
The number 37 and games
1 note · View note
dataexpertise18 · 12 days
Text
Advanced Techniques in Deep Learning: Transfer Learning and Reinforcement Learning
Deep learning has made remarkable strides in artificial intelligence, enabling machines to perform tasks that were once thought to be the exclusive domain of human intelligence. Neural networks, which lie at the heart of deep learning, emulate the human brain’s structure and function to process large volumes of data, identify patterns, and make informed decisions.
While traditional deep learning models have proven to be highly effective, advanced techniques like transfer learning and reinforcement learning are setting new benchmarks, expanding the potential of AI even further. This article explores these cutting-edge techniques, shedding light on their functionalities, advantages, practical applications, and real-world case studies.
Understanding Transfer Learning
Transfer learning is a powerful machine learning method where a model trained on one problem is repurposed to solve a different, but related, problem. This technique leverages knowledge from a previously solved task to tackle new challenges, much like how humans apply past experiences to new situations. Here's a breakdown of how transfer learning works and its benefits:
Use of Pre-Trained Models: In essence, transfer learning involves using pre-trained models like VGG, ResNet, or BERT. These models are initially trained on large datasets such as ImageNet for visual tasks or extensive text corpora for natural language processing (NLP). This pre-training equips them with a broad understanding of patterns and features.
Fine-Tuning for Specific Tasks: Once a pre-trained model is selected, it undergoes a fine-tuning process. This typically involves modifying the model's architecture:
Freezing Layers: Some layers of the model are frozen to retain the learned features.
Adapting or Replacing Layers: Other layers are adapted or replaced to tailor the model to the specific needs of a new, often smaller, dataset. This customization ensures that the model is optimized for the specific task at hand.
Reduced Training Time and Resources: One of the major benefits of transfer learning is that it significantly reduces the time and computational power required to train a new model. Since the model has already learned essential features from the initial training, it requires less data and fewer resources to fine-tune for new tasks.
Enhanced Performance: By reusing existing models, transfer learning brings valuable pre-learned features and insights, which can lead to higher accuracy in new tasks. This pre-existing knowledge provides a solid foundation, allowing the model to perform better than models trained from scratch.
Effectiveness with Limited Data: Transfer learning is particularly beneficial when labeled data is scarce. This is a common scenario in specialized fields such as medical imaging, where collecting and labeling data can be costly and time-consuming. By leveraging a pre-trained model, researchers can achieve high performance even with a limited dataset.
Transfer learning’s ability to save time and resources while enhancing performance makes it a popular choice across various domains, from image classification to natural language processing and healthcare diagnostics.
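As a toy illustration of the freeze-and-replace idea described above, the sketch below stands a fixed random projection in for the frozen pre-trained layers and fits only a new task-specific head. Everything here is hypothetical: the "model", the synthetic dataset, and all names are illustrative stand-ins, not a real VGG/ResNet workflow.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pre-trained" feature extractor: a fixed projection plus ReLU, standing in
# for the frozen early layers of a network trained on a large dataset.
W_frozen = rng.normal(size=(8, 4))          # frozen weights: never updated

def features(x):
    return np.maximum(x @ W_frozen, 0.0)    # frozen layers with ReLU

# Small labeled dataset for the new task (synthetic, for illustration only).
X = rng.normal(size=(32, 8))
w_true = np.array([[1.0], [-2.0], [0.5], [0.0]])
y = features(X) @ w_true

# Fine-tuning reduces to fitting only the new head; because this toy head is
# linear, a least-squares solve plays the role of training it from scratch.
F = features(X)                             # frozen features, computed once
W_head, *_ = np.linalg.lstsq(F, y, rcond=None)

print(np.allclose(F @ W_head, y))
```

In a real workflow the frozen part would be an actual pre-trained network (with its weights excluded from the optimizer) and the new head would be trained by gradient descent on the small dataset; the least-squares solve above simply plays that role for a linear head.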
Practical Applications of Transfer Learning
Transfer learning has demonstrated its effectiveness across various domains by adapting pre-trained models to solve specific tasks with high accuracy. Below are some key applications:
Image Classification: One of the most common uses of transfer learning is in image classification. For instance, Google’s Inception model, which was pre-trained on the ImageNet dataset, has been successfully adapted for various image recognition tasks. Researchers have fine-tuned the Inception model to detect plant diseases, classify wildlife species, and identify objects in satellite imagery. These applications have achieved high accuracy, even with relatively small amounts of training data.
Natural Language Processing (NLP): Transfer learning has revolutionized how models handle language-related tasks. A prominent example is BERT (Bidirectional Encoder Representations from Transformers), a model pre-trained on vast amounts of text data. BERT has been fine-tuned for a variety of NLP tasks, such as:
Sentiment Analysis: Understanding and categorizing emotions in text, such as product reviews or social media posts.
Question Answering: Powering systems that can provide accurate answers to user queries.
Language Translation: Improving the quality of automated translations between different languages.
Companies have also utilized BERT to develop customer service bots capable of understanding and responding to inquiries, which significantly enhances user experience and operational efficiency.
Healthcare: The healthcare industry has seen significant benefits from transfer learning, particularly in medical imaging. Pre-trained models have been fine-tuned to analyze images like X-rays and MRIs, allowing for early detection of diseases. Examples include:
Pneumonia Detection: Models fine-tuned on medical image datasets to identify signs of pneumonia from chest X-rays.
Brain Tumor Identification: Using pre-trained models to detect abnormalities in MRI scans.
Cancer Detection: Developing models that can accurately identify cancerous lesions in radiology scans, thereby assisting doctors in making timely diagnoses and improving patient outcomes.
Performance Improvements: Studies have shown that transfer learning can significantly enhance model performance. According to research published in the journal Nature, using transfer learning reduced error rates in image classification tasks by 40% compared to models trained from scratch. In the field of NLP, a survey by Google AI reported that transfer learning improved accuracy metrics by up to 10% over traditional deep learning methods.
These examples illustrate how transfer learning not only saves time and resources but also drives significant improvements in accuracy and efficiency across various fields, from agriculture and wildlife conservation to customer service and healthcare diagnostics.
Exploring Reinforcement Learning
Reinforcement learning (RL) offers a unique approach compared to other machine learning techniques. Unlike supervised learning, which relies on labeled data, RL focuses on training an agent to make decisions by interacting with its environment and receiving feedback in the form of rewards or penalties. This trial-and-error method enables the agent to learn optimal strategies that maximize cumulative rewards over time.
How Reinforcement Learning Works:
Agent and Environment Interaction: In RL, an agent (the decision-maker) perceives its environment, makes decisions, and performs actions that alter its state. The environment then provides feedback, which could be a reward (positive feedback) or a penalty (negative feedback), based on the action taken.
Key Components of RL:
Agent: The learner or decision-maker that interacts with the environment.
Environment: The system or scenario within which the agent operates and makes decisions.
Actions: The set of possible moves or decisions the agent can make.
States: Different configurations or situations that the environment can be in.
Rewards: Feedback received by the agent after taking an action, which is used to evaluate the success of that action.
Policy: The strategy or set of rules that define the actions the agent should take based on the current state.
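The components above can be made concrete with a minimal tabular Q-learning sketch (one common RL algorithm). The environment is hypothetical, a five-state corridor in which the agent starts in state 0 and is rewarded only for reaching state 4, and all constants are illustrative:

```python
import numpy as np

n_states, n_actions = 5, 2                  # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))         # value of each (state, action) pair
alpha, gamma, eps = 0.5, 0.9, 0.2           # learning rate, discount, exploration
rng = np.random.default_rng(1)

def step(state, action):
    """Environment: move left/right along the corridor; reward at the far end."""
    nxt = max(0, state - 1) if action == 0 else min(n_states - 1, state + 1)
    reward = 1.0 if nxt == n_states - 1 else 0.0
    return nxt, reward

for _ in range(500):                        # episodes of trial and error
    s = 0
    while s != n_states - 1:
        # epsilon-greedy policy: mostly exploit the best-known action,
        # occasionally explore a random one
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2, r = step(s, a)
        # update Q toward the reward plus the discounted best future value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)                   # learned greedy action per state
print(policy[:-1])
```

After training, the greedy policy chooses "right" in every non-terminal state: the agent discovers the reward-maximizing strategy purely from interaction and feedback, with no labeled examples.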
Adaptive Learning and Real-Time Decision-Making:
The adaptive nature of reinforcement learning makes it particularly effective in dynamic environments where conditions are constantly changing. This adaptability allows systems to learn autonomously, without requiring explicit instructions, making RL suitable for real-time applications where quick, autonomous decision-making is crucial. Examples include robotics, where robots learn to navigate different terrains, and self-driving cars that must respond to unpredictable road conditions.
Statistics and Real-World Impact:
Success in Gaming: One of the most prominent examples of RL’s success is in the field of gaming. DeepMind’s AlphaGo, powered by reinforcement learning, famously defeated the world champion in the complex game of Go. This achievement demonstrated RL's capability for strategic thinking and complex decision-making. AlphaGo's RL-based approach achieved a win rate of 99.8% against other AI systems and professional human players.
Robotic Efficiency: Research by OpenAI has shown that using reinforcement learning can improve the efficiency of robotic grasping tasks by 30%. This increase in efficiency leads to more reliable and faster robotic operations, highlighting RL’s potential in industrial automation and logistics.
Autonomous Driving: In the automotive industry, reinforcement learning is used to train autonomous vehicles for tasks such as lane changing, obstacle avoidance, and route optimization. By continually learning from the environment, RL helps improve the safety and efficiency of self-driving cars. For instance, companies like Waymo and Tesla use RL techniques to enhance their vehicle's decision-making capabilities in real-time driving scenarios.
Reinforcement learning's ability to adapt and learn from interactions makes it a powerful tool in developing intelligent systems that can operate in complex and unpredictable environments. Its applications across various fields, from gaming to robotics and autonomous vehicles, demonstrate its potential to revolutionize how machines learn and make decisions.
Practical Applications of Reinforcement Learning
One of the most prominent applications of reinforcement learning is in robotics. RL is employed to train robots for tasks such as walking, grasping objects, and navigating complex environments. Companies like Boston Dynamics use reinforcement learning to develop robots that can adapt to varying terrains and obstacles, enhancing their functionality and reliability in real-world scenarios.
Reinforcement learning has also made headlines in the gaming industry. DeepMind’s AlphaGo, powered by reinforcement learning, famously defeated a world champion in the ancient board game Go, demonstrating RL's capacity for strategic thinking and complex decision-making. The success of AlphaGo, which achieved a 99.8% win rate against other AI systems and professional human players, showcased the potential of RL in mastering sophisticated tasks.
In the automotive industry, reinforcement learning is used to train self-driving cars to make real-time decisions. Autonomous vehicles rely on RL to handle tasks such as lane changing, obstacle avoidance, and route optimization. Companies like Tesla and Waymo utilize reinforcement learning to improve the safety and efficiency of their autonomous driving systems, pushing the boundaries of what AI can achieve in real-world driving conditions.
Comparing Transfer Learning and Reinforcement Learning
While both transfer learning and reinforcement learning are advanced techniques that enhance deep learning capabilities, they serve different purposes and excel in different scenarios. Transfer learning is ideal for tasks where a pre-trained model can be adapted to a new but related problem, making it highly effective in domains like image and language processing. It is less resource-intensive and quicker to implement compared to reinforcement learning.
Reinforcement learning, on the other hand, is better suited for scenarios requiring real-time decision-making and adaptation to dynamic environments. Its complexity and need for extensive simulations make it more resource-demanding, but its potential to achieve breakthroughs in fields like robotics, gaming, and autonomous systems is unparalleled.
Conclusion
Transfer learning and reinforcement learning represent significant advancements in the field of deep learning, each offering unique benefits that can be harnessed to solve complex problems. By repurposing existing knowledge, transfer learning allows for efficient and effective solutions, especially when data is scarce. Reinforcement learning, with its ability to learn and adapt through interaction with the environment, opens up new possibilities in areas requiring autonomous decision-making and adaptability.
As AI continues to evolve, these techniques will play a crucial role in developing intelligent, adaptable, and efficient systems. Staying informed about these advanced methodologies and exploring their applications will be key to leveraging the full potential of AI in various industries. Whether it's enhancing healthcare diagnostics, enabling self-driving cars, or creating intelligent customer service bots, transfer learning and reinforcement learning are paving the way for a smarter, more automated future.
1 note · View note
Text
In today’s rapidly progressing technological world, artificial intelligence (AI) has become a game-changing force permeating diverse sectors. From healthcare to finance, AI applications have revolutionized traditional practices, driving efficiency, innovation, and competitive advantage. Through advanced algorithms and machine learning techniques, AI enables businesses to analyze vast datasets, extract valuable insights, and make data-driven decisions with unprecedented accuracy and speed.
Among the most remarkable achievements in AI’s journey is the saga of AlphaGo. Developed by DeepMind, AlphaGo represents a paradigm shift in the realm of strategic gaming. In 2016, it captivated the world by defeating world champion Lee Sedol in the ancient game of Go, a feat once considered beyond the reach of machines. The key to AlphaGo’s success lies in its groundbreaking approach to gameplay, leveraging sophisticated deep learning algorithms to analyze millions of potential moves and devise strategies with unmatched precision.
The victory of AlphaGo underscored the transformative potential of artificial intelligence in gaming, transcending the boundaries of human expertise and reshaping our understanding of strategic thinking. Its triumph serves as a testament to the relentless pursuit of innovation in AI research and development.
Beyond the realm of gaming, AI continues to revolutionize various aspects of business operations, including content creation, optimization, and SEO services. Through advanced algorithms and predictive analytics, AI enables businesses to generate high-quality content, optimize their online presence, and improve search engine rankings. By harnessing the power of AI, companies can enhance their digital marketing strategies, drive organic traffic, and stay ahead of the competition.
In this era of AI-driven innovation, we at Reves BI are at the forefront of leveraging cutting-edge technology to offer comprehensive SEO services and content optimization solutions. With a focus on harnessing the power of AI and data science consulting, Reves BI empowers businesses to maximize their online potential, drive growth, and achieve sustainable success in the digital landscape.
Contact Us today for expert consultation and transformative solutions in data science.
#artificial intelligence #AlphaGo #data science consulting #deep learning #deepmind #artificial intelligence in gaming #datasets
0 notes
abhijitdivate1 · 4 months
Text
1 note · View note
zzedar2 · 4 months
Text
Was reading about AlphaGo's match against Fan Hui (the European champion). This was a much less powerful version than the one that beat Lee Sedol, and apparently there was debate among professional go players in China about whether (based on those games) it was as good as a professional player. It really shows the gap between the Big Three go-playing countries and the West that a computer that beat the champion of an entire continent 5-0 was arguably not good enough even to be considered a professional player in Asia.
1 note · View note
1day1movie · 7 months
Text
AlphaGo (2017) Greg Kohs.
0 notes
sifytech · 9 months
Text
All You Need to Know about Gemini, Google's Response to ChatGPT
As Google releases its new generative AI model called Gemini, Adarsh takes you through everything we know about it so far. Read More. https://www.sify.com/ai-analytics/all-you-need-to-know-about-gemini-googles-response-to-chatgpt/
0 notes
drnic1 · 11 months
Text
The Good, The Hype, and The Doctor's Perspective
AI in Healthcare
This week I am talking to Rob Brisk, MBChB PhD, Chief Scientific Officer for Eolas Medical (@EolasMedical). Rob has a fascinating background, with experience in both healthcare and machine learning and artificial intelligence. Rob shares his journey from being a physician to venturing into the world of AI and emphasizes the importance of clinicians’ involvement in AI…
0 notes
bloginnovazione · 1 year
Link
0 notes
scienza-magia · 1 year
Text
We will clone the human brain into organoid intelligences
How many gigabytes or terabytes can the human brain store? How much data can the human brain hold, and how does it relate to artificial intelligence? And what is the further evolution represented by organoid intelligence, and how does it work?

The data-storage capacity of the human brain is a complex topic, still the subject of scientific research and debate. There can be no precise measurement of its capacity in gigabytes or terabytes, because the human brain works very differently from, say, digital memory. The human brain is an incredibly complex organ made up of billions of interconnected neurons. Its abilities rest on the formation and strengthening of synaptic connections between neurons, through processes such as learning and experience.

How many terabytes of data can the human brain hold?

Many experts have nonetheless repeatedly tried to equate the human brain, in some way, with the behavior of digital storage. One of the first to attempt such an estimate was Professor Robert Birge, an award-winning chemist who has worked extensively on the relationship between biology and electronics. As far back as 1996, at Syracuse University, Birge made a rough calculation that he later refined at the University of Connecticut. Birge associated each neuron with a single bit (in another article we discuss binary code, bits, and bytes): a simple multiplication yields an overall storage capacity for the human brain of roughly 5 TB (terabytes). He later published a series of observations suggesting that a more accurate figure might be around 30-40 TB.
A later study by researchers at the Salk Institute, led by Terry Sejnowski, estimated that each connection in the brain could store ten times more than previously thought, so much so that one could even speak of petabytes as the unit for measuring the volume of data the human brain can hold. Professor Thomas Hartung of Johns Hopkins University is convinced of this, citing brain-level storage capacity on the order of 2.5 petabytes, equivalent to 2,500 TB (more on this below). Other similar investigations place the capacity of the human brain between 50 and 200 TB. Although these are purely theoretical assessments, hard to verify with factual data, the figures involved help us imagine how much data each of us can potentially retain, especially if the brain is trained to do so.

Artificial intelligence challenges the human brain's storage abilities

If in the days of floppy disks we had just 1.44 MB of storage, and Seagate has now unveiled the first 40-terabyte hard drives, technological progress in the storage segment has been truly dizzying, with continuous innovation on the technology front. And considering that the capacity of individual storage units can be combined, as can be done at home or in the office with a RAID scheme for example, to create scalable storage solutions with virtually unlimited capacity, the leap forward of recent years looks even more remarkable. The advent of modern artificial intelligence poses a real challenge to the storage abilities of the human brain. Models trained on billions of parameters can capture, ever more precisely, the "essence" of the interconnections between terms and even between concepts.

Training, parameters, and storage abilities

A larger amount of training data is particularly advantageous for artificial intelligence. The more data available, the more the model can learn from a wide range of examples and develop a deeper "understanding" of the patterns and relationships within the data. The quality of the training data, however, plays an essential role: representative, consistent information ensures better learning and limits potential biases. While the human brain has limited storage capacity, AI systems can use digital storage devices to archive enormous amounts of data. The servers of the biggest tech companies can store petabytes or even exabytes of data. Yet despite the storage and data-processing capabilities of artificial intelligence, the human brain retains significant advantages in areas such as creativity, the processing of complex information, and adaptation to new contexts. The human brain can form interdisciplinary connections, learn from emotional experiences, and apply knowledge in far broader ways than current AI can. Important developments are expected with OpenAI's generative model GPT-5, which should be able to generate and understand multiple modalities of information. A multimodal approach in artificial intelligence makes it possible to integrate and combine information from different sources: think of the various sensory channels and data types (text, images, audio, video, positional information, and so on).

The next step: organoid intelligence

In the first part of 2023, the concept of organoid intelligence also began to be discussed with greater conviction. We recently published an article describing the potential of bioinformatics, which could in the future prove to be a sort of next-generation artificial intelligence. Artificial intelligence has long drawn ample inspiration from the workings of the human brain, an approach that has proved highly successful. But what if, instead of trying to make artificial intelligence more brain-like, we went straight to the source? Thomas Hartung has explained that teams of scientists across multiple disciplines are working to create revolutionary biocomputers in which three-dimensional cultures of brain cells, called "brain organoids", serve as the biological hardware.

What brain organoids are and how they work

Brain organoids share key aspects of the brain's function and structure, such as neurons and the other brain cells essential to cognitive functions like learning and memory. Because they are 3D structures, the cell density of the culture is about 1,000 times that of a flat culture, so many more neural connections can form. "While silicon-based computers are certainly better with numbers, the brain is far more effective at learning," Hartung explained. "The brain not only learns better, it is also more energy efficient." As an example, he cites AlphaGo, the Google DeepMind AI that, trained on hundreds of thousands of games, was able to beat all humans, even the champions of various games. The amount of energy spent training AlphaGo exceeds what is needed to sustain an active adult for at least a decade. "Brains also have an extraordinary capacity to store information, estimated at 2,500 TB," Hartung added. "We are reaching the physical limits of silicon computers because we cannot pack more transistors into a tiny chip. But the brain is wired completely differently. It has about 100 billion neurons linked through more than 10^15 connection points. That is an enormous difference in power compared with our current technology."

The structure of biocomputers with organoid intelligence

The goal will be to use lab-grown brain organoids to build a true organoid intelligence. Tests are ongoing: Hartung explained that current brain organoids are too small. Each contains about 50,000 cells; to reach an organoid intelligence, 10 million will be needed. In parallel, the authors are developing technologies to communicate with the organoids, in other words to send them information and read back what they produce. The study's authors intend to adapt tools from various scientific disciplines, such as bioengineering and machine learning, and to design new stimulation and recording devices. "We have developed a brain-computer interface device that is a sort of EEG cap for organoids, which we presented in a paper published last August. It is a flexible shell densely covered with tiny electrodes that can both pick up signals from the organoid and transmit signals to it," Hartung said. The authors anticipate that organoid intelligence will eventually integrate a wide range of stimulation and recording tools, orchestrating interactions across networks of interconnected organoids that implement more complex computations. This is not science fiction.

Although organoid intelligence is still in its infancy, a recently published study by one of the co-authors of the Science article, Brett Kagan, Chief Scientific Officer of Cortical Labs, has offered a first intriguing preview: Kagan's team showed that an ordinary flat culture of brain cells can learn to play the video game Pong. "From here on, it is just a matter of building the community, the tools, and the technologies to realize the full potential of organoid intelligence," Professor Hartung concluded.
1 note · View note
gogoigo · 2 years
Text
[embedded Instagram post]
0 notes
meyer-sensei · 2 years
Quote
In March 2016, AlphaGo beat the world's best player in a five-game series. … one particular move by AlphaGo led to exclamations from commentators and was described as ‘beautiful’ by a past Go champion—precisely because it caught them off-balance. Contrary to widespread belief, machines are now capable of generating novel outcomes, entirely beyond the contemplation of their original human designers.
The Future of the Professions: How Technology Will Transform the Work of Human Experts, by Richard Susskind and Daniel Susskind, 2022.
1 note · View note
sin-scape · 2 years
Text
[embedded YouTube video]
AlphaGo - The Movie.
With more board configurations than there are atoms in the universe, the ancient Chinese game of Go has long been considered a grand challenge for artificial intelligence.
On March 9, 2016, the worlds of Go and artificial intelligence collided in South Korea for an extraordinary best-of-five-game competition, coined The DeepMind Challenge Match. Hundreds of millions of people around the world watched as a legendary Go master took on an unproven AI challenger for the first time in history.
1 note · View note
kneels-bohr · 27 days
Text
raw-dogged an ~18 hour journey, no phone, no laptop, no notebook, no music, no object of any kind and no company to distract me. i think i might be losing it just a little bit. for the most part you can just create imaginary conversation partners and it's surprisingly fine although idk what the effects of doing this to yourself long-term would be. like alphago but for your personality
52 notes · View notes
14dyh · 5 months
Text
list of my saved youtube videos that Hange would watch:
A/N: someone watch this nerdy stuff with me pls, i'll go insane. need a hange for myself :') currently watching these videos to feed my nerdy hange delusions :D [i marked my faves with an (*) hehe]
short videos (10-30 minutes)
The Nightmares of Eduardo Valdés-Hevia
The Creatures of Codex Inversus
Nietzsche's Most Dangerous Idea | The Übermensch
Don't fear intelligent machines. Work with them | Garry Kasparov
* Decomposing Bodies to Solve Cold Case Murders
Glow-in-the-dark sharks and other stunning sea creatures | David Gruber
* You Will Never Do Anything Remarkable
* The Cognitive Tradeoff Hypothesis
* Inspiring the next generation of female engineers | Debbie Sterling | TEDxPSU
The Disturbing Paintings of Hieronymus Bosch
Roko's Basilisk: The Most Terrifying Thought Experiment
The 5 Most Dangerous Chemicals on Earth
Depth Charge Explosion Soaks Dr. Tatiana In Water
Monster Surgeon: The Lost Work of Dr. Spencer Black
The Biology of Giants Explained | The Science of Giants
I Made an Ecosystem With a Mini Pond Inside, Here’s How!
CSI Special Insects Unit: Forensic Entomology
not-so-short but under 1 hr (31-59 minutes)
* The unpredictable tale of The Dead Man's Story by J. Hain Friswell
Planets: The Search for a New World | Space Science | Episode 4 | Free Documentary
* Let's Visit the World of the Future [tw: might be a bit disturbing, it's an interesting scifi horror though]
The Mystery of Matter: “INTO THE ATOM” (Documentary)
* Australia's Deadliest Coast (Full Episode) | When Sharks Attack: There Will Be Blood
* How Leonardo da Vinci Changed the World
long videos (over 1 hr)
Demystifying the Higgs Boson with Leonard Susskind
* The complete FUN TO IMAGINE with Richard Feynman
The Brain That Wouldn't Die (1962) Colorized | Sci-Fi Horror | Cult Classic | Full Movie
* AlphaGo - The Movie | Full award-winning documentary
Particle Fever - Documentary
* Exploring The Underwater World | 4K UHD | Blue Planet II | BBC Earth
What was the Earth like in the Age of Giant Prehistoric Creatures? | Documentary Earth History
117 notes · View notes