#AlphaGo
Explore tagged Tumblr posts
Text
Was reading about AlphaGo's match against Fan Hui (the European champion). This was a much less powerful version than the one that beat Lee Sedol, and apparently there was debate among professional go players in China about whether (based on those games) it was as good as a professional player. It really shows the gap between the Big Three go-playing countries and the West that a computer that beat the champion of an entire continent 5-0 was arguably not good enough to even be considered a professional player in Asia.
5 notes
Text
Stanford professor claims we have already surpassed artificial general intelligence
Artificial general intelligence (AGI), which refers to machines capable of performing any human task, is often considered the most ambitious goal of technology research. Despite disagreement among experts about whether it can ever be achieved, Michal Kosinski, professor of organizational behavior at Stanford University's Graduate School of Business, is convinced that we have already crossed that barrier. In a presentation at the Brazil at Silicon Valley event, which gathered around 600 participants at the Google Event Center in Sunnyvale, California, Kosinski stated: "We have already surpassed artificial general intelligence." (...)
Read the full story at the link below:
https://www.inspirednews.com.br/professor-de-stanford-afirma-que-ja-superamos-a-inteligencia-artificial-geral

#michalkosinski #inteligenciaartificial #stanford #brazilatsiliconvalley #gpt-4 #deepblue #alphago #cambridgeanalytica #reconhecimentofacial #eticatecnologica #inovacao
0 notes
Text
https://www.instagram.com/share/reel/BASwD9rSJd
0 notes
Text
The evolution of AI: From AlphaGo to AI agents, physical AI, and beyond
The critical moment of ChatGPT: The release of ChatGPT by OpenAI in November 2022 marked another significant milestone in the evolution of AI. ChatGPT, a large language model capable of generating human-like text, demonstrated the potential of AI to understand and generate natural…
0 notes
Text
Small and big steps toward the empire of artificial intelligence
Source: Open Tech. Translation of the infographic: 1943 – McCulloch and Pitts publish a paper titled A Logical Calculus of the Ideas Immanent in Nervous Activity, proposing the foundations of neural networks. 1950 – Turing publishes Computing Machinery and Intelligence, proposing the Turing Test as a way to measure a machine's capability. 1951 – Marvin Minsky and Dean…
#ajedrez #AlphaFold2 #AlphaGo #AlphaZero #aprendizaje automático #artículo #artistas #aspirador #Blake Lemoine #Conferencia de Dartmouth #copyright #Dean Edmonds #Deep Blue #DeepFace #DeepMind #DeviantArt #ELIZA #Facebook #gatos #Genuine Impact #Go #Google #GPS #GPT-3 #gráfico #Hinton #IA #IBM #infografía #inteligencia artificial
1 note
Text
Advanced Techniques in Deep Learning: Transfer Learning and Reinforcement Learning
Deep learning has made remarkable strides in artificial intelligence, enabling machines to perform tasks that were once thought to be the exclusive domain of human intelligence. Neural networks, which lie at the heart of deep learning, emulate the human brain’s structure and function to process large volumes of data, identify patterns, and make informed decisions.
While traditional deep learning models have proven to be highly effective, advanced techniques like transfer learning and reinforcement learning are setting new benchmarks, expanding the potential of AI even further. This article explores these cutting-edge techniques, shedding light on their functionalities, advantages, practical applications, and real-world case studies.
Understanding Transfer Learning
Transfer learning is a powerful machine learning method where a model trained on one problem is repurposed to solve a different, but related, problem. This technique leverages knowledge from a previously solved task to tackle new challenges, much like how humans apply past experiences to new situations. Here's a breakdown of how transfer learning works and its benefits:

Use of Pre-Trained Models: In essence, transfer learning involves using pre-trained models like VGG, ResNet, or BERT. These models are initially trained on large datasets such as ImageNet for visual tasks or extensive text corpora for natural language processing (NLP). This pre-training equips them with a broad understanding of patterns and features.
Fine-Tuning for Specific Tasks: Once a pre-trained model is selected, it undergoes a fine-tuning process. This typically involves modifying the model's architecture:
Freezing Layers: Some layers of the model are frozen to retain the learned features.
Adapting or Replacing Layers: Other layers are adapted or replaced to tailor the model to the specific needs of a new, often smaller, dataset. This customization ensures that the model is optimized for the specific task at hand.
Reduced Training Time and Resources: One of the major benefits of transfer learning is that it significantly reduces the time and computational power required to train a new model. Since the model has already learned essential features from the initial training, it requires less data and fewer resources to fine-tune for new tasks.
Enhanced Performance: By reusing existing models, transfer learning brings valuable pre-learned features and insights, which can lead to higher accuracy in new tasks. This pre-existing knowledge provides a solid foundation, allowing the model to perform better than models trained from scratch.
Effectiveness with Limited Data: Transfer learning is particularly beneficial when labeled data is scarce. This is a common scenario in specialized fields such as medical imaging, where collecting and labeling data can be costly and time-consuming. By leveraging a pre-trained model, researchers can achieve high performance even with a limited dataset.
Transfer learning’s ability to save time and resources while enhancing performance makes it a popular choice across various domains, from image classification to natural language processing and healthcare diagnostics. The sketch below illustrates the freeze-and-replace fine-tuning pattern described above.
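As a concrete illustration, here is a minimal transfer-learning sketch in PyTorch (assuming torch and torchvision are installed). The choice of ResNet-18 and the 10-class output head are illustrative placeholders, not details taken from this article.

```python
# Minimal transfer-learning sketch: load a pre-trained backbone,
# freeze its layers, and replace the final layer for a new task.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-18 pre-trained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze all layers to retain the features learned during pre-training.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head to suit the new, smaller dataset
# (a hypothetical 10-class task).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head is optimized, so fine-tuning needs far less data
# and compute than training from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```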
Practical Applications of Transfer Learning
Transfer learning has demonstrated its effectiveness across various domains by adapting pre-trained models to solve specific tasks with high accuracy. Below are some key applications:
Image Classification: One of the most common uses of transfer learning is in image classification. For instance, Google’s Inception model, which was pre-trained on the ImageNet dataset, has been successfully adapted for various image recognition tasks. Researchers have fine-tuned the Inception model to detect plant diseases, classify wildlife species, and identify objects in satellite imagery. These applications have achieved high accuracy, even with relatively small amounts of training data.
Natural Language Processing (NLP): Transfer learning has revolutionized how models handle language-related tasks. A prominent example is BERT (Bidirectional Encoder Representations from Transformers), a model pre-trained on vast amounts of text data. BERT has been fine-tuned for a variety of NLP tasks (see the sketch at the end of this section), such as:
Sentiment Analysis: Understanding and categorizing emotions in text, such as product reviews or social media posts.
Question Answering: Powering systems that can provide accurate answers to user queries.
Language Translation: Improving the quality of automated translations between different languages. Companies have also utilized BERT to develop customer service bots capable of understanding and responding to inquiries, which significantly enhances user experience and operational efficiency.
Healthcare: The healthcare industry has seen significant benefits from transfer learning, particularly in medical imaging. Pre-trained models have been fine-tuned to analyze images like X-rays and MRIs, allowing for early detection of diseases. Examples include:
Pneumonia Detection: Models fine-tuned on medical image datasets to identify signs of pneumonia from chest X-rays.
Brain Tumor Identification: Using pre-trained models to detect abnormalities in MRI scans.
Cancer Detection: Developing models that can accurately identify cancerous lesions in radiology scans, thereby assisting doctors in making timely diagnoses and improving patient outcomes.
Performance Improvements: Studies have shown that transfer learning can significantly enhance model performance. According to research published in the journal Nature, using transfer learning reduced error rates in image classification tasks by 40% compared to models trained from scratch. In the field of NLP, a survey by Google AI reported that transfer learning improved accuracy metrics by up to 10% over traditional deep learning methods.
These examples illustrate how transfer learning not only saves time and resources but also drives significant improvements in accuracy and efficiency across various fields, from agriculture and wildlife conservation to customer service and healthcare diagnostics.
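To make the BERT fine-tuning pattern concrete, here is a minimal sketch using the Hugging Face transformers library (assumed installed) with the public bert-base-uncased checkpoint. The two-label sentiment task and the example sentence are hypothetical; the classification head is randomly initialized and only becomes useful after fine-tuning on labeled data.

```python
# Minimal BERT setup for a two-label sentiment-analysis task.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=2,  # e.g. negative / positive
)

# Tokenize an example review and run it through the model.
inputs = tokenizer("The product works great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

prediction = logits.argmax(dim=-1).item()  # index of the predicted label
```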
Exploring Reinforcement Learning
Reinforcement learning (RL) offers a unique approach compared to other machine learning techniques. Unlike supervised learning, which relies on labeled data, RL focuses on training an agent to make decisions by interacting with its environment and receiving feedback in the form of rewards or penalties. This trial-and-error method enables the agent to learn optimal strategies that maximize cumulative rewards over time.
How Reinforcement Learning Works:
Agent and Environment Interaction: In RL, an agent (the decision-maker) perceives its environment, makes decisions, and performs actions that alter its state. The environment then provides feedback, which could be a reward (positive feedback) or a penalty (negative feedback), based on the action taken.
Key Components of RL (each appears in the Q-learning sketch after this list):
Agent: The learner or decision-maker that interacts with the environment.
Environment: The system or scenario within which the agent operates and makes decisions.
Actions: The set of possible moves or decisions the agent can make.
States: Different configurations or situations that the environment can be in.
Rewards: Feedback received by the agent after taking an action, which is used to evaluate the success of that action.
Policy: The strategy or set of rules that define the actions the agent should take based on the current state.
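These components map directly onto tabular Q-learning, one of the simplest RL algorithms. Below is a self-contained sketch; the toy "walk right to the goal" environment and all constants are invented for illustration and are not from this article.

```python
# Tabular Q-learning sketch: an agent learns, by trial and error,
# to walk right toward a rewarding goal state.
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q-table: estimated cumulative reward for each (state, action) pair.
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment: apply an action, return (next_state, reward)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if nxt == N_STATES - 1 else 0.0  # reward only at the goal
    return nxt, reward

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Policy: epsilon-greedy over the Q-table.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward = step(state, action)
        # Update: nudge the estimate toward the reward plus the
        # discounted value of the best next action.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

# The learned policy: the greedy action in each state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```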
Adaptive Learning and Real-Time Decision-Making:
The adaptive nature of reinforcement learning makes it particularly effective in dynamic environments where conditions are constantly changing. This adaptability allows systems to learn autonomously, without requiring explicit instructions, making RL suitable for real-time applications where quick, autonomous decision-making is crucial. Examples include robotics, where robots learn to navigate different terrains, and self-driving cars that must respond to unpredictable road conditions.
Statistics and Real-World Impact:
Success in Gaming: One of the most prominent examples of RL’s success is in the field of gaming. DeepMind’s AlphaGo, powered by reinforcement learning, famously defeated the world champion in the complex game of Go. This achievement demonstrated RL's capability for strategic thinking and complex decision-making; in its original evaluation, AlphaGo won 99.8% of its games against other Go programs.
Robotic Efficiency: Research by OpenAI has shown that using reinforcement learning can improve the efficiency of robotic grasping tasks by 30%. This increase in efficiency leads to more reliable and faster robotic operations, highlighting RL’s potential in industrial automation and logistics.
Autonomous Driving: In the automotive industry, reinforcement learning is used to train autonomous vehicles for tasks such as lane changing, obstacle avoidance, and route optimization. By continually learning from the environment, RL helps improve the safety and efficiency of self-driving cars. For instance, companies like Waymo and Tesla use RL techniques to enhance their vehicles' decision-making capabilities in real-time driving scenarios.
Reinforcement learning's ability to adapt and learn from interactions makes it a powerful tool in developing intelligent systems that can operate in complex and unpredictable environments. Its applications across various fields, from gaming to robotics and autonomous vehicles, demonstrate its potential to revolutionize how machines learn and make decisions.
Practical Applications of Reinforcement Learning
One of the most prominent applications of reinforcement learning is in robotics. RL is employed to train robots for tasks such as walking, grasping objects, and navigating complex environments. Companies like Boston Dynamics use reinforcement learning to develop robots that can adapt to varying terrains and obstacles, enhancing their functionality and reliability in real-world scenarios.
Reinforcement learning has also made headlines in the gaming industry. DeepMind’s AlphaGo, powered by reinforcement learning, famously defeated a world champion in the ancient board game Go, demonstrating RL's capacity for strategic thinking and complex decision-making. The success of AlphaGo, which won 99.8% of its evaluation games against other Go programs, showcased the potential of RL in mastering sophisticated tasks.
In the automotive industry, reinforcement learning is used to train self-driving cars to make real-time decisions. Autonomous vehicles rely on RL to handle tasks such as lane changing, obstacle avoidance, and route optimization. Companies like Tesla and Waymo utilize reinforcement learning to improve the safety and efficiency of their autonomous driving systems, pushing the boundaries of what AI can achieve in real-world driving conditions.
Comparing Transfer Learning and Reinforcement Learning

While both transfer learning and reinforcement learning are advanced techniques that enhance deep learning capabilities, they serve different purposes and excel in different scenarios. Transfer learning is ideal for tasks where a pre-trained model can be adapted to a new but related problem, making it highly effective in domains like image and language processing. It is also less resource-intensive and quicker to implement than reinforcement learning.
Reinforcement learning, on the other hand, is better suited for scenarios requiring real-time decision-making and adaptation to dynamic environments. Its complexity and need for extensive simulations make it more resource-demanding, but its potential to achieve breakthroughs in fields like robotics, gaming, and autonomous systems is unparalleled.
Conclusion
Transfer learning and reinforcement learning represent significant advancements in the field of deep learning, each offering unique benefits that can be harnessed to solve complex problems. By repurposing existing knowledge, transfer learning allows for efficient and effective solutions, especially when data is scarce. Reinforcement learning, with its ability to learn and adapt through interaction with the environment, opens up new possibilities in areas requiring autonomous decision-making and adaptability.
As AI continues to evolve, these techniques will play a crucial role in developing intelligent, adaptable, and efficient systems. Staying informed about these advanced methodologies and exploring their applications will be key to leveraging the full potential of AI in various industries. Whether it's enhancing healthcare diagnostics, enabling self-driving cars, or creating intelligent customer service bots, transfer learning and reinforcement learning are paving the way for a smarter, more automated future.
#ReinforcementLearning #TransferLearning #DeepLearning #MachineLearning #AI #ArtificialIntelligence #NaturalLanguageProcessing #ImageClassification #Robotics #AutonomousVehicles #PretrainedModels #BERT #AlphaGo #AIResearch #RealTimeAI
1 note
Text
#DataScience #MachineLearning #Statistics #AI #BigData #NetflixPrize #AlphaGo #DeepLearning #AIResearch #DataAnalysis #TechnologyEvolution #PredictiveAnalytics #ComputingHistory #DataScienceFuture #EthicalAI
1 note
Text
All You Need to Know about Gemini, Google's Response to ChatGPT

As Google releases its new generative AI model called Gemini, Adarsh takes you through everything we know about it so far. Read More. https://www.sify.com/ai-analytics/all-you-need-to-know-about-gemini-googles-response-to-chatgpt/
#ChatGPT #AI #ArtificialIntelligence #Deepmind #AImodel #LargeLanguageModel #LLM #GoogleBard #GoogleGemini #AlphaGo
0 notes
Text
We will clone the human brain into organoid intelligences
The human brain: how many gigabytes or terabytes can it store? How much data can the human brain hold, and how does this compare with artificial intelligence? The further evolution represented by organoid intelligence: what it is and how it works. The data storage capacity of the human brain is a complex topic, still the subject of scientific research and debate. No precise measurement of its capacity in gigabytes or terabytes is possible, because the human brain works very differently from, say, digital memory. The human brain is an incredibly complex organ made up of billions of interconnected neurons. Its abilities rest on the formation and strengthening of synaptic connections between neurons, through processes such as learning and experience. How many terabytes of data can the human brain hold? Many experts have nonetheless repeatedly tried to somehow equate the human brain with the behavior of digital storage. One of the first to produce such estimates was Professor Robert Birge, an award-winning chemist who has worked extensively on the relationship between biology and electronics. Back in 1996, at Syracuse University, Birge made a rough calculation that he later refined at the University of Connecticut. Birge associated each neuron with a single bit (binary code, bits, and bytes are covered in another article): a simple multiplication yields an overall storage capacity for the human brain of roughly 5 TB (terabytes). He later published a series of observations suggesting that a more accurate figure might be around 30-40 TB.

A later study by researchers at the Salk Institute, led by Terry Sejnowski, estimated that each connection in the brain could store ten times more than previously thought, so much so that the petabyte might even be the right unit for the volume of data the human brain can hold. Professor Thomas Hartung of Johns Hopkins University is convinced of this, speaking of brain-level storage capacity on the order of 2.5 petabytes, equivalent to 2,500 TB (more on this below). Other, similar investigations place the brain's capacity between 50 and 200 TB. Although these are purely theoretical assessments, hard to back up with factual evidence, the figures involved help us imagine how much data each of us can potentially retain, especially if the brain is trained to do so. Artificial intelligence challenges the human brain's storage abilities: if in the days of floppy disks we had barely 1.44 MB of storage, and Seagate has just introduced the first 40-terabyte hard drives, the technological progress in the storage segment has been dizzying, with continuous innovation on the technical front. And considering that the capacity of individual storage units can be combined to create scalable storage solutions with virtually unlimited capacity (as can be done at home or in the office with a RAID scheme, for example), the leap forward of recent years looks even more remarkable. The advent of modern artificial intelligence poses a real challenge to the human brain's storage abilities. Models trained on billions of parameters can capture, ever more precisely, the "essence" of the interconnections between terms and even concepts. Training, parameters used, and storage abilities: a larger amount of training data is particularly advantageous for artificial intelligence. The more data available, the more the model can learn from a wide range of examples and develop a deeper "understanding" of the patterns and relationships within the data. The quality of the training data, however, plays an essential role: representative, consistent information ensures better learning and limits potential biases. While the human brain has limited storage capacity, artificial intelligence systems can use digital storage devices to archive enormous amounts of data; the servers of the largest technology companies can store petabytes or even exabytes. Yet despite AI's storage and data-processing capabilities, the human brain enjoys significant advantages in areas such as creativity, the processing of complex information, and adaptation to new contexts. It can form interdisciplinary connections, learn from emotional experiences, and apply knowledge far more broadly than today's artificial intelligences. Important developments are expected with OpenAI's generative model GPT-5, which should be capable of generating and understanding multiple modalities of information.
A multimodal approach in artificial intelligence makes it possible to integrate and combine information from different sources: think of the various sensory channels and data types (text, images, audio, video, positional information, ...). The next step: organoid intelligence. In the first part of 2023, the concept of organoid intelligence has also begun to be discussed with greater conviction. We recently published an article describing the potential of bioinformatics, which could prove to be a kind of next-generation artificial intelligence. Artificial intelligence has long drawn broad inspiration from the workings of the human brain, an approach that has proved highly successful. But instead of trying to make artificial intelligence more brain-like, what if we went straight to the source? Thomas Hartung explained this, revealing that teams of scientists across multiple disciplines are working to create revolutionary biocomputers in which three-dimensional cultures of brain cells, called "brain organoids", serve as the biological hardware. What brain organoids are and how they work: brain organoids share key aspects of brain function and structure, such as neurons and the other brain cells essential to cognitive functions like learning and memory. Because they are 3D structures, the cell density of the culture is 1,000 times greater than in a flat culture, so many more neural connections can form. "While silicon-based computers are certainly better with numbers, the brain is far more effective at learning," Hartung explained. "The brain not only learns better, it is also more energy-efficient." As an example, he cites AlphaGo, the Google DeepMind artificial intelligence that, trained on hundreds of thousands of games, was able to beat all human players, even the champions. The amount of energy spent training AlphaGo exceeds what is needed to sustain an active adult for at least a decade. "Brains also have an extraordinary capacity to store information, estimated at 2,500 TB," Hartung added. "We are reaching the physical limits of silicon computers because we cannot pack more transistors into a tiny chip. But the brain is wired completely differently. It has about 100 billion neurons connected through more than 10^15 connection points. That is an enormous difference in power compared with our current technology." The structure of biocomputers with organoid intelligence: the goal is to take lab-grown brain organoids and turn them into a genuine organoid intelligence. Testing continues: Hartung explained that current brain organoids are too small. Each contains about 50,000 cells; to reach an organoid intelligence, around 10 million are needed. In parallel, the authors are developing technologies to communicate with the organoids, in other words to send them information and read out what they produce. The study's authors intend to adapt tools from various scientific disciplines, such as bioengineering and machine learning, as well as to design new stimulation and recording devices.
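As a back-of-the-envelope check (our arithmetic, not from the article), the two figures Hartung quotes are roughly consistent with each other:

```latex
% 2,500 TB expressed in bits, spread over ~10^15 synaptic connections:
2{,}500~\mathrm{TB} = 2.5 \times 10^{15}~\mathrm{bytes}
                    = 2 \times 10^{16}~\mathrm{bits},
\qquad
\frac{2 \times 10^{16}~\mathrm{bits}}{10^{15}~\mathrm{connections}}
  \approx 20~\mathrm{bits~per~connection}.
```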
"We have developed a brain-computer interface device that is a kind of EEG cap for organoids, which we presented in a paper published last August. It is a flexible shell densely covered with tiny electrodes that can both pick up signals from the organoid and transmit signals to it," Hartung said. The authors anticipate that organoid intelligence (OI) will eventually integrate a wide range of stimulation and recording tools, orchestrating interactions across networks of interconnected organoids to implement more complex computations. This is not science fiction. Although organoid intelligence is still in its infancy, a recently published study by one of the co-authors of the Science article, Brett Kagan, Chief Scientific Officer of Cortical Labs, offered an interesting first glimpse: Kagan's team showed that an ordinary flat culture of brain cells can learn to play the video game Pong. "From here on, it is just a matter of building the community, the tools, and the technologies to realize the full potential of organoid intelligence," Professor Hartung concluded.
1 note
Text
labatut's description of the lee sedol-alphago match is making me cry. we won, once, by a single divine bolt of genius... thank you lee sedol
10 notes
Text
It has to be stated as a defiant position because despite there being "no need to inflict that boredom on other people - other artists," the boredom of a few re: actually doing art or respecting others' work was and still is inflicted on everyone through AI.
And for clarification:
"People who think the lack of autonomy is an interesting artistic statement"? Not when making art they don't. The statement can be about a lack of of autonomy, or about making things themselves despite constraints (which is how most forms of poetry function). Not having autonomy and not making something in the first place is not a statement, it's a lack of statement. Silence isn't speech. Definitionally.
"People who are physically disabled in a way that prevents them from engaging with 'traditional' art" is very exactly no one who would artistically benefit from the plagiarism machine. Watching, hearing, smelling, touching, reading, existing in, just knowing any piece of art in any shape or form is engaging with it. If they can't do that with the rest they can't do it with dall-e. You mean people who physically can't create things but somehow are still able to communicate something to the machine.
And to that:
The robot isn't making them able, it's literally a third party copying people who were able.
It's less involved than ordering at a Subway, which doesn't make you a "sandwich maker" even if you decided what to put in it. You're just another customer. The process is still handled by someone else, your options are still limited by outside forces, and you still only asked for the ingredients.
It all relies on the assumption that the skills displayed are irrelevant to the end product - that a flawless hand-painted monochrome is equal in value to a click with the paint bucket tool, since they produce the same result. There's a reason why art is considered a creative process, not an end result.
Ultimately, this line of thought about "making art accessible" is about the supposed tragedy of someone having a vision without the skills to realise it. But that was always a solved issue. If you can develop these skills, develop them. If you can't or don't want to, commission someone. They're the only ways for you to actually be involved in the creation. Tweaking a machine until it's "yeah, close enough" isn't involvement. It's boredom. It's not caring about what is there. And for some reason that only applies to a few types of art, hm? If I tweak an android to run faster than Usain Bolt it doesn't make me an athlete. If I input a recipe setting in my Thermomix it doesn't make me a competent cook. Installing an autopilot doesn't make me a great pilot. And with my body I can't be any of these things.... and they're all damn closer to accessibility than midjourney is. You want to know what disabled people need? If I need something fetched - e.g. at the pharmacy - and my joint issues prevent me, then a small, fast robot that knows the way would be great. My eyes aren't good enough to visually check for a number of important things in the kitchen and my brain doesn't process time normally, so an automatic timer for cooking times with things that are already checked everywhere saves me a lot of time and food and health issues. Not a single time have I needed openai to make something. If I draw something, maybe my poor vision shows and I get the colours wrong. I don't have a robot colour-pick for me from the top 10 reposted painters online. It looks the same to me but not to you, and that's a much stronger statement about lack of autonomy than you not seeing it or me not making it. If I write it'll be my author's voice, not predictive text with a non-confrontational, PC-according-to-Silicon-Valley-execs tone. If I decide to try composing it will never be "an epic tune in the style of <insert currently-viral group>". And that's the difference between inspiration and botting.
As gen-AI becomes more normalized (Chappell Roan encouraging it, grifters on the rise, young artists using it), I wanna express how I will never turn to it because it fundamentally bores me to my core. There is no reason for me to want to use gen-AI because I will never want to give up my autonomy in creating art. I never want to become reliant on an inhuman object for expression, least of all if that object is created and controlled by tech companies. I draw not because I want a drawing but because I love the process of drawing. So even in a future where everyone’s accepted it, I’m never gonna sway on this.
#sure deep learning has its uses#but just because there's a shortcut to appearing competent at art doesn't mean that art was ever about shortcuts to surface appearances#this is incredibly different to photography which ALSO IS AN ART#also a universal quality of proper usage of deep learning is that the training sets are honestly sourced and the creators compensated#when applicable#alphago showed you don't really need to go the plagiarism route in the first place#protein folding prediction and cancer cell recognition showed that you can work smarter rather than harder#robotics in art can be and mean so much#but you know what can't? outsourcing the creative outburst to people unrelated to your idea through the means of an algorithm with meta-tags
47K notes
Text
初心 (shoshin) "beginner's mind"

Obviously, the new year is a traditional time to start new things.
Another option is to continue something old, but look at it anew.
In Japanese this is known as 初心 (shoshin), or “Beginner’s Mind”.
初 = for the first time, in the beginning.
This is the same kanji as 初め (hajimé), meaning “for the first time”, as in the common Japanese greeting 初めまして (hajimémashité).
(Note: 初め is not to be confused with 始め, which is pronounced the same and has a very similar meaning.
The difference is that 初 functions like an adverb of time, whereas 始め is more like a verb - as in “to begin”.)
心 = heart, mind
Having a “Beginner’s Mind”, viewing a situation from a fresh perspective, can lead to insight and innovation.
An example of this is the success of the go-playing AI program AlphaGo.
The Asian board game go (known in Japan as igo) is well known for having so many permutations of moves (apparently more than the number of atoms in the universe) that programming a computer that could beat a human player was long considered the holy grail of AI.
When AlphaGo eventually beat a human player, it used moves which humans would consider deeply eccentric, and at one point it made a move which no go experts had ever seen before.
What allowed the AI to win wasn’t necessarily the computational power, although this was immense. It was the fact that the machine learned largely through self-play (its successor, AlphaGo Zero, taught itself entirely from scratch), rather than being taught by a human who would necessarily be steeped in thousands of years of go history, culture and tradition.
Instead of going along with the preconceived narrative of how go should be played it used its own ideas with few fixed beliefs to get in the way.
Sometimes, less knowledge can be a good thing.
#japanese language #japan #japanese culture #書道 #japanese #japanese calligraphy #calligraphy #japanese art #kanji #japanese langblr
30 notes
Text
Recent advances in artificial intelligence (AI) have generalized the use of large language models in our society, in areas such as education, science, medicine, art, and finance, among many others. These models are increasingly present in our daily lives. However, they are not as reliable as users expect. This is the conclusion of a study led by a team from the VRAIN Institute of the Universitat Politècnica de València (UPV) and the Valencian School of Postgraduate Studies and Artificial Intelligence Research Network (ValgrAI), together with the University of Cambridge, published today in the journal Nature.

The work reveals an "alarming" trend: compared to the first models, and considering certain aspects, reliability has worsened in the most recent models (GPT-4 compared to GPT-3, for example). According to José Hernández-Orallo, researcher at the Valencian Research Institute in Artificial Intelligence (VRAIN) of the UPV and ValgrAI, one of the main concerns about the reliability of language models is that their performance does not align with the human perception of task difficulty. In other words, there is a discrepancy between the expectation that models will fail on tasks humans perceive as difficult and the tasks where models actually fail. "Models can solve certain complex tasks according to human abilities, but at the same time fail in simple tasks in the same domain. For example, they can solve several doctoral-level mathematical problems, but can make mistakes in a simple addition," points out Hernández-Orallo.

In 2022, Ilya Sutskever, the scientist behind some of the biggest advances in artificial intelligence in recent years (from the ImageNet solution to AlphaGo) and co-founder of OpenAI, predicted that "perhaps over time that discrepancy will diminish." However, the study by the UPV, ValgrAI, and University of Cambridge team shows that this has not been the case. To demonstrate this, they investigated three key aspects that affect the reliability of language models from a human perspective.
25 September 2024
50 notes