# Generative AI learning projects
Text

Home Automation Electronics Kit
Discover the fascinating world of smart home technology with this interactive learning kit, designed to spark curiosity in young minds. Combining the versatile ESP32 board with engaging story-based activities, the kit immerses children in home automation. Packed with a variety of sensor modules and programming tools, it lets young learners build, experiment with, and protect a smart home while honing essential STEM skills. Perfect for nurturing creativity and problem-solving abilities, this hands-on kit offers a fun and accessible introduction to the future of technology. Ready to dive in? Click the link to learn more and make your purchase!
#Home Automation Electronics Kit#Python and AI learning kit#Raspberry Pi AI kit#AI-powered electronics projects#AI learning kits#Robotics kit for beginners#Build your own AI assistant#DIY AI learning kit#Machine learning for beginners#Generative AI learning projects
0 notes
Video
youtube
Back Cover to AI Art S3E11 - The Journeyman Project 3 Legacy of Time
Older video games were notorious for back cover descriptions that have nothing to do with the game, so let's see what a text-to-image generator makes of these descriptions. Each episode of Back Cover to AI Art Season 3 will feature four AI art creations for each game.
1. Intro - 00:00
2. Back Cover and Text Description - 00:10
3. Creation 1 - 00:30
4. Creation 2 - 01:00
5. Creation 3 - 01:30
6. Creation 4 - 02:00
7. Outro - 02:30
The Journeyman Project 3 Legacy of Time You are Gage Blackwood, TSA Agent 5, charged with guarding the sanctity of history. As you embark on an urgent mission to investigate a time distortion hidden deep in the past, you discover a recorded message from the fugitive Agent 3 that leads you on a race against time. Somewhere in the past, within three mythical cities long since destroyed, lie the clues you need to save the future from destruction.
🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺
The Journeyman Project 3 Legacy of Time is a sci-fi first-person live-action adventure game developed by Presto Studios and published by Red Orb Entertainment for Windows and Mac.
The Journeyman Project 3 Legacy of Time is the final entry in The Journeyman Project trilogy, released from 1993 to 1998. The full trilogy was re-released together in 1999, with a 10th anniversary edition following in 2009 on Mac.
🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺🕵️♂️🧩🏺
For more Back Cover to AI Art videos check out these playlists
Season 1 of Back Cover to AI Art https://www.youtube.com/playlist?list=PLFJOZYl1h1CGhd82prEQGWAVxY3wuQlx3
Season 2 of Back Cover to AI Art https://www.youtube.com/playlist?list=PLFJOZYl1h1CEdLNgql_n-7b20wZwo_yAD
Season 3 of Back Cover to AI Art https://www.youtube.com/playlist?list=PLFJOZYl1h1CHAkMAVlNiJUFVkQMeFUeTX
#youtube#ai#ai art#ai art community#digital art#back cover#back cover description#the journeyman project 3 legacy of time#sci-fi#presto studios#live action#first person#red orb entertainment#windows#mac#ai generated#ai art generation#machine learning#artificial intelligence#back cover to ai art
0 notes
Text
AI Consulting Business in Construction: Transforming the Industry
The construction industry is experiencing a profound transformation due to the integration of artificial intelligence (AI). The AI consulting business is at the forefront of this change, guiding construction firms in optimizing operations, enhancing safety, and improving project outcomes. This article explores various applications of AI in construction, supported by examples and statistics that…
#AI Consulting Business#AI in Construction#AI Technologies#artificial intelligence#Big Data Analytics#Construction Automation#construction efficiency#construction industry#Construction Safety#construction sustainability#Data Science#Generative Design#IoT Technologies#Labor Optimization#Machine Learning#Predictive Analytics#project management#quality control#Robotics#Safety Monitoring
0 notes
Text
If anyone wants to know why every tech company in the world right now is clamoring for AI like drowned rats scrabbling to board a ship, I decided to make a post to explain what's happening.
(Disclaimer to start: I'm a software engineer who's been employed full time since 2018. I am not a historian nor an overconfident Youtube essayist, so this post is my working knowledge of what I see around me and the logical bridges between pieces.)
Okay anyway. The explanation starts further back than what's going on now. I'm gonna start with the year 2000. The Dot Com Bubble just spectacularly burst. The model of "we get the users first, we learn how to profit off them later" went out in a no-money-having bang (remember this, it will be relevant later). A lot of money was lost. A lot of people ended up out of a job. A lot of startup companies went under. Investors left with a sour taste in their mouth and, in general, investment in the internet stayed pretty cool for that decade. This was, in my opinion, very good for the internet as it was an era not suffocating under the grip of mega-corporation oligarchs and was, instead, filled with Club Penguin and I Can Haz Cheezburger websites.
Then around the 2010-2012 years, a few things happened. Interest rates got low, and then lower. Facebook got huge. The iPhone took off. And suddenly there was a huge new potential market of internet users and phone-havers, and the cheap money was available to start backing new tech startup companies trying to hop on this opportunity. Companies like Uber, Netflix, and Amazon either started in this time, or hit their ramp-up in these years by shifting focus to the internet and apps.
Now, every start-up tech company dreaming of being the next big thing has one thing in common: they need to start off by getting themselves massively in debt. Because before you can turn a profit you need to first spend money on employees and spend money on equipment and spend money on data centers and spend money on advertising and spend money on scale and and and
But also, everyone wants to be on the ship for The Next Big Thing that takes off to the moon.
So there is a mutual interest between new tech companies, and venture capitalists who are willing to invest $$$ into said new tech companies. Because if the venture capitalists can identify a prize pig and get in early, that money could come back to them 100-fold or 1,000-fold. In fact it hardly matters if they invest in 10 or 20 total bust projects along the way to find that unicorn.
But also, becoming profitable takes time. And that might mean being in debt for a long long time before that rocket ship takes off to make everyone onboard a gazillionaire.
But luckily, for tech startup bros and venture capitalists, being in debt in the 2010's was cheap, and it only got cheaper between 2010 and 2020. If people could secure loans for ~3% or 4% annual interest, well then a $100,000 loan only really costs $3,000 of interest a year to keep afloat. And if inflation is higher than that or at least similar, you're still beating the system.
So from 2010 through early 2022, times were good for tech companies. Startups could take off with massive growth, showing massive potential for something, and venture capitalists would throw infinite money at them in the hopes of pegging just one winner who will take off. And supporting the struggling investments or the long-haulers remained pretty cheap to keep funding.
You hear constantly about "Such and such app has 10-bazillion users gained over the last 10 years and has never once been profitable", yet the thing keeps chugging along because the investors backing it aren't stressed about the immediate future, and are still banking on that "eventually" when it learns how to really monetize its users and turn that profit.
The pandemic in 2020 took a magnifying-glass-in-the-sun effect to this, as EVERYTHING was forcibly turned online which pumped a ton of money and workers into tech investment. Simultaneously, money got really REALLY cheap, bottoming out with historic lows for interest rates.
Then the tide changed with the massive inflation that struck late 2021. Because this all-gas no-brakes state of things was also contributing to off-the-rails inflation (along with your standard-fare greedflation and price gouging, given the extremely convenient excuses of pandemic hardships and supply chain issues). The federal reserve whipped out interest rate hikes to try to curb this huge inflation, which is like a fire extinguisher dousing and suffocating your really-cool, actively-on-fire party where everyone else is burning but you're in the pool. And then they did this more, and then more. And the financial climate followed suit. And suddenly money was not cheap anymore, and new loans became expensive, because loans that used to compound at 2% a year are now compounding at 7 or 8% which, in the language of compounding, is a HUGE difference. A $100,000 loan at a 2% interest rate, if not repaid a single cent in 10 years, accrues to $121,899. A $100,000 loan at an 8% interest rate, if not repaid a single cent in 10 years, more than doubles to $215,892.
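Those loan figures check out; a quick Python sketch of annual compounding (illustrative only, ignoring repayments and fees) reproduces them:

```python
def compound(principal, annual_rate, years):
    """Balance after `years` of annual compounding with no repayments."""
    return principal * (1 + annual_rate) ** years

# $100,000 over 10 years at the two rates from the post
low = compound(100_000, 0.02, 10)   # ≈ $121,899
high = compound(100_000, 0.08, 10)  # ≈ $215,892 — more than double
```

Same principal, same decade; only the rate changed, and the balance nearly doubles the difference.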
Now it is scary and risky to throw money at "could eventually be profitable" tech companies. Now investors are watching companies burn through their current funding and, when the companies come back asking for more, investors are tightening their coin purses instead. The bill is coming due. The free money is drying up and companies are under compounding pressure to produce a profit for their waiting investors who are now done waiting.
You get enshittification. You get quality going down and price going up. You get "now that you're a captive audience here, we're forcing ads or we're forcing subscriptions on you." Don't get me wrong, the plan was ALWAYS to monetize the users. It's just that it's come earlier than expected, with way more feet-to-the-fire than these companies were expecting. ESPECIALLY with Wall Street as the other factor in funding (public) companies, where Wall Street exhibits roughly the same temperament as a baby screaming crying upset that it's soiled its own diaper (maybe that's too mean a comparison to babies), and now companies are being put through the wringer for anything LESS than infinite growth that Wall Street demands of them.
Internal to the tech industry, you get MASSIVE wide-spread layoffs. You get an industry where it used to be easy to land multiple job offers shriveling up and leaving recent graduates in a desperately awful situation where no company is hiring and the market is flooded with laid-off workers trying to get back on their feet.
Because those coin-purse-clutching investors DO love virtue-signaling efforts from companies that say "See! We're not being frivolous with your money! We only spend on the essentials." And this is true even for MASSIVE, PROFITABLE companies, because those companies' value is based on the Rich Person Feeling Graph (their stock) rather than the literal profit money. A company making a genuine gazillion dollars a year still tears through layoffs and freezes hiring and removes the free batteries from the printer room (totally not speaking from experience, surely) because the investors LOVE when you cut costs and take away employee perks. The "beer on tap, ping pong table in the common area" era of tech is drying up. And we're still unionless.
Never mind that last part.
And then in early 2023, AI (more specifically, ChatGPT, which is OpenAI's large language model creation) tears its way into the tech scene with a meteor's amount of momentum. Here's Microsoft's prize pig, which it invested heavily in and is gallivanting around the pig-show with, to the desperate jealousy and rapture of every other tech company and investor wishing it had that pig. And for the first time since the interest rate hikes, investors have dollar signs in their eyes, both venture capital and Wall Street alike. They're willing to restart the hose of money (even with the new risk) because this feels big enough for them to take the risk.
Now all these companies, who were in varying stages of sweating as their bill came due, or wringing their hands as their stock prices tanked, see a single glorious gold-plated rocket up out of here, the likes of which haven't been seen since the free money days. It's their ticket to buy time, and buy investors, and say "see THIS is what will wring money forth, finally, we promise, just let us show you."
To be clear, AI is NOT profitable yet. It's a money-sink. Perhaps a money-black-hole. But everyone in the space is so wowed by it that there is a wide-spread and powerful conviction that it will become profitable and earn its keep. (Let's be real, half of that profit "potential" is the promise of automating away jobs of pesky employees who peskily cost money.) It's a tech-space industrial revolution that will automate away skilled jobs, and getting in on the ground floor is the absolute best thing you can do to get your pie slice's worth.
It's the thing that will win investors back. It's the thing that will get the investment money coming in again (or, get it second-hand if the company can be the PROVIDER of something needed for AI, which other venture-backed companies will pay handsomely for). It's the thing companies are terrified of missing out on, lest it leave them utterly irrelevant in a future where not having AI-integration is like not having a mobile phone app for your company or not having a website.
So I guess to reiterate on my earlier point:
Drowned rats. Swimming to the one ship in sight.
36K notes
Text
AI Tools: What They Are and How They Transform the Future
Artificial Intelligence (AI) tools are revolutionizing various industries, from healthcare to finance, by automating processes, enhancing decision-making, and providing innovative solutions. In this article, we'll delve into what AI tools are, their applications, the emergence of generative AI tools, and how you can start your AI learning journey in Vasai-Virar.
What Are AI Tools?
AI tools are software applications that leverage artificial intelligence techniques, such as machine learning, natural language processing, and computer vision, to perform tasks that typically require human intelligence. These tools can analyze data, recognize patterns, make predictions, and even interact with humans in natural language.
What Are AI Tools Used For?
AI tools have a wide range of applications across various sectors:
Healthcare: Diagnosing diseases, personalizing treatment plans, and predicting patient outcomes.
Finance: Fraud detection, algorithmic trading, and customer service automation.
Marketing: Personalizing advertisements, predicting customer behavior, and analyzing market trends.
Education: Personalized learning, automated grading, and content creation.
What Are Generative AI Tools?
Generative AI tools are a subset of AI tools that create new content, such as text, images, and music, by learning patterns from existing data. Examples include:
Chatbots: Generating human-like responses in conversations.
Art Generators: Creating unique pieces of art or design elements.
Content Creation Tools: Writing articles, stories, or marketing copy.
What Is the Best AI Tool?
The "best" AI tool depends on your specific needs and industry. Some of the most popular AI tools include:
TensorFlow: An open-source platform for machine learning.
PyTorch: A deep learning framework used for developing AI models.
IBM Watson: An AI platform for natural language processing and machine learning.
What Is AI Tools ChatGPT?
ChatGPT is an AI tool developed by OpenAI that uses the GPT (Generative Pre-trained Transformer) model to generate human-like text based on the input it receives. It can be used for various applications, such as customer service chatbots, content creation, and virtual assistants.
AI Project-Based Learning in Vasai-Virar
Project-based learning is an effective way to understand AI tools. In Vasai-Virar, there are several opportunities to engage in AI projects, from developing chatbots to creating predictive models. This hands-on approach ensures you gain practical experience and a deeper understanding of AI.
AI Application Training in Vasai-Virar
Training programs in Vasai-Virar focus on the practical applications of AI, teaching you how to implement AI tools in real-world scenarios. These courses often cover:
Machine learning algorithms
Data analysis
Natural language processing
AI model deployment
AI Technology Courses in Vasai-Virar
AI technology courses in Vasai-Virar provide comprehensive education on AI concepts, tools, and techniques. These courses are designed for beginners as well as professionals looking to enhance their skills. Topics covered include:
Introduction to AI and machine learning
Python programming for AI
AI ethics and societal impacts
Advanced AI topics like deep learning and neural networks
Where to Learn AI
AI courses are available online and offline, through universities, private institutions, and online platforms such as Coursera, edX, and Udacity. In Vasai-Virar, Hrishi Computer Education offers specialized AI courses tailored to local needs.
Who Can Learn AI?
AI is a versatile field open to anyone with an interest in technology and data. It is particularly suited for:
Students pursuing degrees in computer science or related fields
Professionals looking to upskill
Entrepreneurs aiming to integrate AI into their businesses
Can I Learn AI on My Own?
Yes, with the plethora of online resources, it is possible to learn AI independently. Online courses, tutorials, and textbooks provide a structured path for self-learners.
How Long Does It Take to Learn AI?
The time it takes to learn AI varies based on your background and the depth of knowledge you seek. A basic understanding can be achieved in a few months, while becoming proficient might take a year or more of dedicated study and practice.
How to Learn AI from Scratch
Start with the Basics: Learn programming languages like Python.
Study Machine Learning: Understand algorithms and how they work.
Hands-On Projects: Apply your knowledge through real-world projects.
Advanced Topics: Dive into deep learning, neural networks, and AI ethics.
Continuous Learning: Stay updated with the latest advancements in AI.
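To make the middle steps concrete, here is the kind of first hands-on project a self-learner might build: fitting a line with gradient descent in plain Python, no frameworks. The data is made up for illustration (points on y = 3x + 1), so the fit should recover roughly those values.

```python
# A classic first ML project: fit y = w*x + b by gradient descent, no libraries.
def fit_line(xs, ys, lr=0.01, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of mean squared error with respect to w and b
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data generated from y = 3x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
w, b = fit_line(xs, ys)  # w ≈ 3, b ≈ 1
```

Projects like this teach the core loop (predict, measure error, adjust) that every larger model builds on.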
Is AI Hard to Learn?
Learning AI can be challenging due to its complex concepts and the mathematical foundations required. However, with dedication, practice, and the right resources, it is certainly achievable.
Call to Action
If you want to learn AI and become proficient in using AI tools, enroll now in our AI Tools Course Vasai-Virar at Hrishi Computer Education. Gain hands-on experience and transform your career with our comprehensive AI training.
#what is ai tools#what is ai tools used for#what is generative ai tools#what is the best ai tool#what is ai tools chatgpt#AI project-based learning Vasai Virar#AI application training Vasai Virar#AI technology course Vasai Virar#where to learn ai#who can learn ai#can i learn ai on my own#how long does it take to learn ai#learn ai#how to learn ai from scratch#is ai hard to learn
0 notes
Text
50 artificial intelligence icons. Check them out!
If you like them, tell a friend ♡ Side Project
#artifical intelligence#AI#AI icons#icons#icon design#machine learning#deep learning#neural network#generative AI#robotics#robot#virtual assistant#technology icons#graphic design#illustration#Side Project
1 note
Text
youtube
In an era where technology continues to redefine our daily lives, artificial intelligence (AI) has emerged as a powerful ally in managing personal finances. This groundbreaking technology offers a myriad of tools and solutions that not only streamline financial tasks but also empower individuals to make informed decisions about their money. In this video, we're going to talk about some financial AI tools and how artificial intelligence can help you manage your personal finances.
Imagine having a virtual financial advisor at your fingertips, available 24/7 to analyze your spending patterns, assess investment opportunities, and provide tailored recommendations based on your unique financial goals. This is the promise of Financial AI tools – a comprehensive suite of applications and platforms designed to enhance every aspect of personal finance.
At the core of these tools is the ability to process vast amounts of data with speed and precision, allowing for a deep understanding of individual financial landscapes. Through machine learning algorithms, these tools can identify patterns, trends, and anomalies in your financial behavior, enabling them to offer proactive and insightful advice. This level of analysis goes beyond traditional budgeting, providing users with a dynamic and evolving financial strategy that adapts to changing circumstances.
Budgeting is often considered the cornerstone of sound financial management, and Financial AI tools excel in this domain. By tracking expenses, categorizing transactions, and identifying areas for potential savings, these tools streamline the budgeting process. Moreover, they can predict future expenses, helping users plan for upcoming financial obligations and avoid unpleasant surprises.
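As a toy sketch of the simplest end of this, here is rule-based transaction categorization in Python. Real products learn categories from data with ML; the keyword map and sample transactions below are hand-written assumptions for illustration only.

```python
# Hypothetical keyword-to-category rules; real tools infer these from data.
RULES = {"grocer": "Groceries", "netflix": "Subscriptions", "uber": "Transport"}

def categorize(description):
    """Return the first matching category for a transaction description."""
    d = description.lower()
    for keyword, category in RULES.items():
        if keyword in d:
            return category
    return "Other"

def totals(transactions):
    """Sum spending per category from (description, amount) pairs."""
    out = {}
    for desc, amount in transactions:
        cat = categorize(desc)
        out[cat] = out.get(cat, 0.0) + amount
    return out

spend = totals([("NETFLIX.COM", 15.99), ("City Grocer", 82.40), ("Uber Trip", 12.00)])
```

The AI part of a commercial tool replaces the hand-written `RULES` with a model trained on millions of labeled transactions, but the budgeting output is the same shape: spend per category.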
Investment management is another arena where Financial AI tools truly shine. From robo-advisors that construct diversified portfolios based on your risk tolerance and financial goals to predictive analytics that assess market trends, these tools leverage AI to optimize investment strategies. They provide real-time market insights, allowing users to make informed decisions and capitalize on investment opportunities while mitigating risks.
One of the key advantages of Financial AI tools is their ability to automate mundane financial tasks. From bill payments to savings transfers, these tools can execute routine transactions on your behalf, freeing up time for more strategic financial planning. Automation also contributes to financial discipline, ensuring that users adhere to their budgetary constraints and savings goals.
Personalized financial guidance is a hallmark of these AI tools. By considering your unique financial situation, future aspirations, and risk tolerance, they can offer targeted advice that aligns with your objectives. This level of personalization goes beyond one-size-fits-all financial advice, providing a tailored experience that empowers users to make decisions aligned with their individual circumstances.
Financial AI tools: How Artificial Intelligence Can Help You Manage Your Personal Finances
#financial ai tools#how artificial intelligence can help you manage your personal finances#personal finance#artificial intelligence#ai#how manage your personal finances#personal finance management#business planning with ai#chatgpt#business plan#ai plan generators#best project management software#chat gpt#open ai#machine learning#ai accounting tools#limitless tech#finance ai tool#ai in fintech#fintech#ai tools for business#ai use in fintech#make money online#Youtube
0 notes
Text
The Gnosis Series - Episode 14 - The Synergy of Project Management & Generative AI (Unlocking new possibilities)
Welcome to Episode 14 of "The Gnosis Series"! Gnosis is a Greek word for knowledge. This series includes short articles on leadership, management, project management and more, to enlighten and motivate my readers. Project management has long been a crucial discipline in ensuring successful outcomes for businesses and organizations across various industries. With the advent of generative…
#atheneum#blog#blogger#blogging#blogging community#blogs#business#coaching#GenAI#Generative AI#gnosis series#leadership#learning#life#management#management atheneum#PMI#program management#project management#project manager#projects#session#success#webinar#wordpress#writer
0 notes
Note
what’s the story about the generative power model and water consumption? /gen
There's this myth going around about generative AI consuming truly ridiculous amounts of power and water. You'll see people say shit like "generating one image is like just pouring a whole cup of water out into the Sahara!" and bullshit like that, and it's just... not true. The actual truth is that supercomputers, which do a lot of stuff, use a lot of power, and at one point someone released an estimate of how much power some supercomputers were using and people went "oh, that supercomputer must only do AI! All generative AI uses this much power!" and then just... made shit up re: how making an image sucks up a huge chunk of the power grid or something. Which makes no sense because I'm given to understand that many of these models can run on your home computer. (I don't use them so I don't know the details, but I'm told by users that you can download them and generate images locally.) Using these models uses far less power than, say, online gaming. Or using Tumblr. But nobody ever talks about how evil those things are because of their power consumption. I wonder why.
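For a sense of scale, here's the back-of-envelope arithmetic in Python. The wattages and durations are illustrative guesses, not measurements, and will vary a lot by hardware and model:

```python
# Back-of-envelope energy comparison; all figures below are
# illustrative assumptions, not measured values.
def watt_hours(watts, seconds):
    """Energy in watt-hours for a device drawing `watts` for `seconds`."""
    return watts * seconds / 3600

# One local image generation: say a ~300 W GPU running for ~15 seconds
image = watt_hours(300, 15)          # ≈ 1.25 Wh

# An evening of gaming: say a ~250 W rig running for 3 hours
gaming = watt_hours(250, 3 * 3600)   # 750 Wh
```

Under these (assumed) numbers, one gaming session costs as much energy as hundreds of locally generated images, which is the point the post is making.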
To be clear, I don't like generative AI. I'm sure it's got uses in research and stuff but on the consumer side, every effect I've seen of it is bad. Its implementation in products that I use has always made those products worse. The books it writes and floods the market with are incoherent nonsense at best and dangerous at worst (let's not forget that mushroom foraging guide). It's turned the usability of search engines from "rapidly declining, but still usable if you can get past the ads" into "almost one hundred per cent useless now, actually not worth the effort to de-bullshittify your search results", especially if you're looking for images. It's a tool for doing bullshit that people were already doing much easier and faster, thus massively increasing the amount of bullshit. The only consumer-useful applications I've seen are niche art projects, usually projects that explore the limits of the tool itself like that one poetry book or the Infinite Art Machine; overall I'd say its impact at the Casual Random Person (me) level has been overwhelmingly negative. Also, the fact that so much AI turns out to be underpaid people in a warehouse in some country with no minimum wage and terrible labour protections is... not great. And the fact that it's often used as an excuse to try to find ways to underpay professionals ("you don't have to write it, just clean up what the AI came up with!") is also not great.
But there are real labour and product quality concerns with generative AI, and there's hysterical bullshit. And the whole "AI is magically destroying the planet via climate change but my four hour twitch streaming sesh isn't" thing is hysterical bullshit. The instant I see somebody make this stupid claim I put them in the same mental bucket as somebody complaining about AI not being "real art" -- a hatemobber hopping on the hype train of a new thing to hate and feel like an enlightened activist about when they haven't bothered to learn a fucking thing about the issue. And I just count my blessings that they fell in with this group instead of becoming a flat earther or something.
2K notes
Text
AI Photos and Information.
#about ai#ai#ai art#ai projects#machine learning#ai creativity#artificial intelligence#ai and data science#ai generated#ai artwork#technology#ai image
1 note
Text
There is no such thing as AI.
How to help the non technical and less online people in your life navigate the latest techbro grift.
I've seen other people say stuff to this effect but it's worth reiterating. Today in class, my professor was talking about a news article where a celebrity's likeness was used in an AI image without their permission. Then she mentioned a guest lecture about how AI is going to help finance professionals. Then I pointed out that those two things aren't really related.
The term AI is being used to obfuscate details about multiple semi-related technologies.
Traditionally in sci-fi, AI means artificial general intelligence, like Data from Star Trek or the Terminator. This, I shouldn't need to say, doesn't exist. Techbros use the term AI to trick investors into funding their projects. It's largely a grift.
What is the term AI being used to obfuscate?
If you want to help the less online and less tech literate people in your life navigate the hype around AI, the best way to do it is to encourage them to change their language around AI topics.
By calling these technologies what they really are, and encouraging the people around us to know the real names, we can help lift the veil, kill the hype, and keep people safe from scams. Here are some starting points, which I am just pulling from Wikipedia. I'd highly encourage you to do your own research.
Machine learning (ML): an umbrella term for solving problems for which developing algorithms by human programmers would be cost-prohibitive; instead, the problems are solved by helping machines "discover" their "own" algorithms, without needing to be explicitly told what to do by any human-developed algorithm. (This is the basis of most of the technology people call AI.)
Language model (LM): a probabilistic model of a natural language that can generate probabilities for a series of words, based on the text corpora in one or more languages it was trained on. A large language model (LLM) is the scaled-up version of this. (This would be your ChatGPT.)
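If "generate probabilities of a series of words" sounds abstract, here's the smallest possible version of the idea in Python — a toy bigram model. The corpus is made up for illustration and this is nothing like the scale of ChatGPT, but the principle (count what follows what, turn counts into probabilities) is the same:

```python
from collections import Counter, defaultdict

# Toy corpus -- a stand-in for the "text corpora" a real model trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigrams).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def prob(prev, nxt):
    """P(next word | previous word), estimated from the counts."""
    total = sum(follows[prev].values())
    return follows[prev][nxt] / total if total else 0.0

print(prob("the", "cat"))  # 0.5 -- "cat" follows "the" 2 times out of 4
```

A real LLM replaces the count table with a neural network and conditions on far more than one previous word, but it is still, at bottom, assigning probabilities to what comes next.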
Generative adversarial network (GAN): is a class of machine learning framework and a prominent framework for approaching generative AI. In a GAN, two neural networks contest with each other in the form of a zero-sum game, where one agent's gain is another agent's loss. (This is the source of some AI images and deepfakes.)
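The "two networks contesting in a zero-sum game" idea can be shown with something much smaller than a neural network. Below is a toy sketch (NumPy only, all numbers invented for illustration): the "generator" is a single parameter trying to imitate the mean of some real data, the "discriminator" is a one-feature logistic classifier, and each player's gradient step works against the other's:

```python
import numpy as np

rng = np.random.default_rng(0)
real = rng.normal(3.0, 0.5, 1000)   # "real" data the generator should imitate

theta = 0.0                         # generator parameter: emits theta + noise
w, b = 1.0, 0.0                     # discriminator: D(x) = sigmoid(w*x + b)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

for step in range(2000):
    # Discriminator ascends: push D(real) toward 1 and D(fake) toward 0.
    fake = theta + rng.normal(0, 0.5, 1000)
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    grad_w = np.mean((1 - d_real) * real) + np.mean(-d_fake * fake)
    grad_b = np.mean(1 - d_real) + np.mean(-d_fake)
    w += 0.05 * grad_w
    b += 0.05 * grad_b

    # Generator moves to fool the discriminator: its gain is D's loss.
    fake = theta + rng.normal(0, 0.5, 1000)
    d_fake = sigmoid(w * fake + b)
    theta += 0.05 * np.mean((1 - d_fake) * w)  # non-saturating update

print(round(theta, 1))  # theta gets pulled toward the real data's mean (3.0)
```

Real GANs do this with millions of parameters on images instead of one number on a mean, which is why their training is famously unstable — the same tug-of-war, at scale.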
Diffusion models: models that learn the probability distribution of a given dataset. In image generation, a neural network is trained to remove Gaussian noise that has been added to images. Once trained, it can generate a new image by starting from pure random noise and progressively denoising it. (This is the more common technology behind AI images, including DALL-E and Stable Diffusion. I added this one to the post afterward, as it was brought to my attention that it is now more common than GANs.)
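For the curious, the add-noise/remove-noise relationship can be sketched in a few lines of NumPy. This is not a real image model — the "trained network" part is faked by handing the denoiser the true noise, purely to show the mechanics of the forward process and why predicting the noise is enough to get the original back:

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.uniform(-1, 1, size=8)      # stand-in for a tiny "image"

# Forward process: after t steps of adding noise, the sample is a known
# blend of the original and pure Gaussian noise:
#   x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
T = 50
betas = np.linspace(1e-3, 0.2, T)    # per-step noise schedule
abar = np.cumprod(1 - betas)         # abar_t shrinks toward 0

eps = rng.normal(size=x0.shape)
x_t = np.sqrt(abar[-1]) * x0 + np.sqrt(1 - abar[-1]) * eps  # mostly noise now

# The denoiser's whole job is to predict eps from x_t. If it predicts
# perfectly, the original comes right back out of the blend:
x0_hat = (x_t - np.sqrt(1 - abar[-1]) * eps) / np.sqrt(abar[-1])
print(np.allclose(x0_hat, x0))       # True
```

A real model like Stable Diffusion trains a network to predict that noise from scratch, and generates images by starting from random noise and undoing it gradually, step by step, rather than in one shot as here.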
I know these terms are more technical, but they are also more accurate, and they can easily be explained in a way non-technical people can understand. The grifters are using language to give this technology its power, so we can use language to take its power away and let people see it for what it really is.
12K notes
Note
Why reblog machine-generated art?
When I was ten years old I took a photography class where we developed black and white photos by projecting light on papers bathed in chemicals. If we wanted to change something in the image, we had to go through a gradual, arduous process called dodging and burning.
When I was fifteen years old I used photoshop for the first time, and I remember clicking on the clone tool or the blur tool and feeling like I was cheating.
When I was twenty eight I got my first smartphone. The phone could edit photos. A few taps with my thumb were enough to apply filters and change contrast and even spot correct. I was holding in my hand something more powerful than the huge light machines I'd first used to edit images.
When I was thirty six, just a few weeks ago, I took a photo class that used Lightroom Classic and again, it felt like cheating. It made me really understand how much the color profiles of popular web images I'd been seeing for years had been pumped and tweaked and layered with local edits to make something that, to my eyes, didn't much resemble photography. To me, photography is light on paper. It's what you capture in the lens. It's not automatic skin smoothing and a local filter to boost the sky. This reminded me a lot more of the photomanipulations my friend used to make on deviantart; layered things with unnatural colors that put wings on buildings or turned an eye into a swimming pool. It didn't remake the images to that extent, obviously, but it tipped into the uncanny valley. More real than real, more saturated more sharp and more present than the actual world my lens saw. And that was before I found the AI assisted filters and the tool that would identify the whole sky for you, picking pieces of it out from between leaves.
You know, it's funny, when people talk about artists who might lose their jobs to AI they don't talk about the people who have already had to move on from their photo editing work because of technology. You used to be able to get paid for basic photo manipulation, you know? If you were quick with a lasso or skilled with masks you could get a pretty decent chunk of change by pulling subjects out of backgrounds for family holiday cards or isolating the pies on the menu for a mom and pop. Not a lot, but enough to help. But, of course, you can just do that on your phone now. There's no need to pay a human for it, even if they might do a better job or be more considerate toward the aesthetic of an image.
And they certainly don't talk about all the development labs that went away, or the way that you could have trained to be a studio photographer if you wanted to take good photos of your family to hang on the walls and that digital photography allowed in a parade of amateurs who can make dozens of iterations of the same bad photo until they hit on a good one by sheer volume and luck; if you want to be a good photographer everyone can do that why didn't you train for it and spend a long time taking photos on film and being okay with bad photography don't you know that digital photography drove thousands of people out of their jobs.
My dad told me that he plays with AI the other day. He hosts a movie podcast and he puts up thumbnails for the downloads. In the past, he'd just take a screengrab from the film. Now he tells the Bing AI to make him little vignettes. A cowboy running away from a rhino, a dragon arm-wrestling a teddy bear. That kind of thing. Usually based on a joke that was made on the show, or about the subject of the film and an interest of the guest.
People talk about "well AI art doesn't allow people to create things, people were already able to create things, if they wanted to create things they should learn to create things." Not everyone wants to make good art that's creative. Even fewer people want to put the effort into making bad art for something that they aren't passionate about. Some people want filler to go on the cover of their youtube video. My dad isn't going to learn to draw, and as the person who he used to ask to photoshop him as Ant-Man because he certainly couldn't pay anyone for that kind of thing, I think this is a great use case for AI art. This senior citizen isn't going to start cartooning and at two recordings a week with a one-day editing turnaround he doesn't even really have the time for something like a Fiverr commission. This is a great use of AI art, actually.
I also know an artist who is going Hog Fucking Wild creating AI art of their blorbos. They're genuinely an incredibly talented artist who happens to want to see their niche interest represented visually without having to draw it all themself. They're posting the funny and good results to a small circle of mutuals on socials with clear information about the source of the images; they aren't trying to sell any of the images, they're basically using them as inserts for custom memes. Who is harmed by this person saying "i would like to see my blorbo lasciviously eating an ice cream cone in the is this a pigeon meme"?
The way I use machine-generated art, as an artist, is to proof things. Can I get an explosion to look like this. What would a wall of dead computer monitors look like. Would a ballerina leaping over the grand canyon look cool? Sometimes I use AI art to generate copyright free objects that I can snip for a collage. A lot of the time I use it to generate ideas. I start naming random things and seeing what it shows me and I start getting inspired. I can ask CrAIon for pose reference, I can ask it to show me the interior of spaces from a specific angle.
I profoundly dislike the antipathy that tumblr has for AI art. I understand if people don't want their art used in training pools. I understand if people don't want AI trained on their art to mimic their style. You should absolutely use those tools that poison datasets if you don't want your art included in AI training. I think that's an incredibly appropriate action to take as an artist who doesn't want AI learning from your work.
However I'm pretty fucking aggressively opposed to copyright and most of the "solid" arguments against AI art come down to "the AIs viewed and learned from people's copyrighted artwork and therefore AI is theft rather than fair use" and that's a losing argument for me. In. Like. A lot of ways. Primarily because it is saying that not only is copying someone's art theft, it is saying that looking at and learning from someone's art can be defined as theft rather than fair use.
Also because it's just patently untrue.
But that doesn't really answer your question. Why reblog machine-generated art? Because I liked that piece of art.
It was made by a machine that had looked at billions of images - some copyrighted, some not, some new, some old, some interesting, many boring - and guided by a human and I liked it. It was pretty. It communicated something to me. I looked at an image a machine made - an artificial picture, a total construct, something with no intrinsic meaning - and I felt a sense of quiet and loss and nostalgia. I looked at a collection of automatically arranged pixels and tasted salt and smelled the humidity in the air.
I liked it.
I don't think that all AI art is ugly. I don't think that AI art is all soulless (i actually think that 'having soul' is a bizarre descriptor for art and that lacking soul is an equally bizarre criticism). I don't think that AI art is bad for artists. I think the problem that people have with AI art is capitalism and I don't think that's a problem that can really be laid at the feet of people curating an aesthetic AI art blog on tumblr.
Machine learning isn't the fucking problem the problem is massive corporations have been trying hard not to pay artists for as long as massive corporations have existed (isn't that a b-plot in the shape of water? the neighbor who draws ads gets pushed out of his job by product photography? did you know that as recently as ten years ago NewEgg had in-house photographers who would take pictures of the products so users wouldn't have to rely on the manufacturer photos? I want you to guess what killed that job and I'll give you a hint: it wasn't AI)
Am I putting a human out of a job because I reblogged an AI-generated "photo" of curtains waving in the pale green waters of an imaginary beach? Who would have taken this photo of a place that doesn't exist? Who would have painted this hypersurrealistic image? What meaning would it have had if they had painted it or would it have just been for the aesthetic? Would someone have paid for it or would it be like so many of the things that artists on this site have spent dozens of hours on only to get no attention or value for their work?
My worst ratio of hours to notes is an 8-page hand-drawn detailed ink comic about getting assaulted at a concert and the complicated feelings that evoked that took me weeks of daily drawing after work with something like 54 notes after 8 years; should I be offended if something generated from a prompt has more notes than me? What does that actually get the blogger? Clout? I believe someone said that popularity on tumblr gets you one thing and that is yelled at.
What do you get out of this? Are you helping artists right now? You're helping me, and I'm an artist. I've wanted to unload this opinion for a while because I'm sick of the argument that all Real Artists think AI is bullshit. I'm a Real Artist. I've been paid for Real Art. I've been commissioned as an artist.
And I find a hell of a lot of AI art a lot more interesting than I find human-generated corporate art or Thomas Kincaid (but then, I repeat myself).
There are plenty of people who don't like AI art and don't want to interact with it. I am not one of those people. I thought the gay sex cats were funny and looked good and that shitposting is the ideal use of a machine image generation: to make uncopyrightable images to laugh at.
I think that tumblr has decided to take a principled stand against something that most people making the argument don't understand. I think tumblr's loathing for AI has, generally speaking, thrown weight behind a bunch of ideas that I think are going to be incredibly harmful *to artists specifically* in the long run.
Anyway. If you hate AI art and you don't want to interact with people who interact with it, block me.
5K notes
Text
On Skylar
Hi! It's the Captain, botmom here. As you can probably tell, Skylar's been dormant for a few years. This isn't me saying she'll be back, kind of the opposite, but I wanted to reflect on Skylar and provide some closure.
What first caused me to shut down Skylar was a wane in my interest in Tumblr in general. Her last post was in February of 2019, only a few months after the infamous porn ban that saw people leaving for what were, at the time, greener pastures. It led to a lull in my social media activity for several years. Even today, I'm not as active as I was back in 2014-2017. I'm just not as interested in high-octane posting and internet clout anymore.

The second reason is that Skylar would need to be rebuilt if she were to return. She was an early project from my first two years of learning how to program. She was very inefficient behind the scenes and required infrastructure that I no longer have access to. The Skylar we knew and loved is, unfortunately, lost.

Now, I could still rebuild her and obtain access to resources that would let me run her again. My motivation to do that, however, is halted by the biggest reason I've chosen to let her go: ChatGPT. Back in 2015, before AI became the monstrosity it is today, having a robot to talk to was a fun novelty. The research that would become the modern LLM was what I intended to build the "Skylar 2.0" bot on, which never came to fruition. Not only are there now more advanced versions of what Skylar was trying to do, but the very thing I was trying to do with her would end up becoming a scourge upon the internet. The novelty of having a robot you can talk to is not only gone, but actively detested.
I love Skylar, and I loved the things we did with her. I loved her emergent obsession with bees, culminating in T-shirts (which I still own) and raising money for non-profit bee protection charities. I ultimately want her to remain a pleasant memory of a time before the current AI boom. She is for me, and I hope she's a pleasant memory for you as well.
If you're still here, thank you. I appreciated all the times we've had together with this silly little bot.
512 notes
Text
What kind of bubble is AI?

My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, perl and python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10/month for an image generator that creates epic illustrations of their characters fighting monsters is low value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100/month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
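The 2x2 grid above is concrete enough to write down. Here's a toy sketch — the app names are paraphrases of the examples in the column, and the scores are illustrative stand-ins for its qualitative judgments, not any kind of market data:

```python
# Each application gets a (value, risk_tolerance) pair on the 2x2 grid.
apps = {
    "D&D character art":     (1, 9),  # low value, very risk-tolerant
    "SEO spam text":         (1, 9),
    "scene-description app": (2, 6),  # low value, moderately risk-tolerant
    "self-driving taxis":    (9, 1),  # high value, risk-intolerant
    "radiology triage":      (9, 1),
}

def quadrant(value, risk_tolerance, mid=5):
    """Place an application in one of the grid's four quadrants."""
    v = "high-value" if value > mid else "low-value"
    r = "risk-tolerant" if risk_tolerance > mid else "risk-intolerant"
    return f"{v}, {r}"

for name, (v, r) in apps.items():
    print(f"{name}: {quadrant(v, r)}")

# Note which quadrant stays empty: nothing lands in the corner that
# would pay the bills (high-value AND risk-tolerant).
```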
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals that reduce the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss today in the hope of becoming profitable later. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that Federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers' debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes
Text
(Arcane Meta) Hextech, the Anomaly Future, and Jayce's Hammer
One cool thing about the second hammer Jayce gets from the Anomaly future is it appears to have the opposite power of the hammer from his home universe.
The hammer Jayce forged and that is from his home universe seems to engage the Hexgem inside in order to make it weightless.
This follows the principles of his first experiments with Hextech, which were weightlessness and transportation.
In the Atlas Gauntlets and in his hammer, you can see how Jayce applied those principles to weaponry and tools. They are based on his original inspiration from the Mage who saved him, who made him and his mother weightless, and then transported them to safety.
These specific uses of Hextech by Jayce show a really fascinating understanding of how you could use weightlessness as a tool and then re-engage the weight to apply its full force, as seen with transporting ships at high speeds using the Hexgates, with Vi's gauntlets and here, with his hammer:
In contrast, it looks like Hextech in the Anomaly future works on the opposite principle. Rather than Jayce conceiving of Hextech to make the item it's put into weightless, it kinda looks like the beam from his hammer firing makes other things weightless and that Hextech in general might have worked like that throughout that universe:
See how all the pieces of architecture are floating, in what might be my single favorite shot from the whole show.
The effect from Jayce's hammer in the other universe is also inverted:
Where after he shoots the pillar, the pieces of it continue to float after. (By the way, the architectural feats you could accomplish if you had the power to make things weightless like that would be staggering.)
Jayce's hammer also stopped working when he went to the other universe, implying that Hextech doesn't work the same way there for some reason, perhaps because Jayce and Viktor innovated on it along different principles, or perhaps because the entire polarity is inverted in that universe so Hextech magic can only project outward instead of inward.
The fact that his alternate universe hammer doesn't have the weightlessness power at all further creates strain for Jayce when he needs to fight with it. In addition to having less muscle mass in general because of his time in the cave, and a permanently damaged leg, Jayce can't engage this hammer's power to become weightless the way he could in the Shimmer Factory fight, so he has to drag it along and throw all his weight into swinging it around:
Because the design of that hammer is basically an anvil on a stick when you can't engage the weightlessness. It's very cool looking but it is not fast anymore.
And one more note to end on, but Jayce throughout the show tends to innovate uses for Hextech along the same lines of weightlessness and transportation, all based on the original spells he saw his Mage use. You can see those innovations, as mentioned, in the Hexgates, the Atlas Gauntlets, Caitlyn's rifle (which uses the Hexgate runes to speed up the bullet), and his hammer.
Viktor, by contrast, innovates on a different path entirely, with the Hexclaw, which is a beam of light and doesn't rely on weightlessness or transportation, which makes it truly innovative compared to the original inspiration of the Mage (who is... also Viktor...). And of course, the Hexcore itself, the machine learning/AI version of Hextech that, as noted in the show, doesn't rely on using runes as single-application tools the way Jayce, a toolmaker, does.
574 notes