#example outputs
Explore tagged Tumblr posts
Text
Like what you see?
Check out our carrd!
0 notes
Text

#bernie sanders#work week reduction#32-hour work week#overtime pay#productivity#technology#fair labor standards act#international examples#france#norway#denmark#germany#well-being#stress#fatigue#republican senator bill cassidy#small enterprises#job losses#consumer prices#japan#economic output#labor dynamics#artificial intelligence#automation#workforce composition
124 notes
·
View notes
Text
id.v is one of those games where you can say this character is your favorite and not know their lore
#LIKE#I LOVE TRACY BUT I ONLY HAVE#TIP OF THE ICEBERG KNOWLEDGE ABOUT HER LORE#IDV IS ONE OF THOSE GAMES WHERE YOU HAVE TO REALLY FIGHT TO GET THEIR LORE#and you acquire bits and pieces of their lore by doing these “quests” that take advantage of their gimmick#like tracy for an example. shes a mechanic. u gotta do these challenges surrounding her robot#and if you achieve it youll get some hints of her lore#I DID NOT BUY TRACY!!!!!#SO I CANNOT GET THOSE ACHIEVEMENTS YET#I KNOW NAIB'S LORE FRONT AND BACK BECAUSE IVE MAINED HIM FOR YEARS#AND READ ALL HIS HINTS OF LORE#ofc this can be avoided by simply reading the wiki but#if you solely focused on the game's output then its really hard to get lore so easily#~ rambling
10 notes
·
View notes
Text
Sometimes when something is popular enough, the "annoying people saying it's the only good thing" aren't the fandom, they're people who interact with it very little.
Like sorry to bring it up but the majority of people who are in the SynthV fandom are fond of vocal synth, like other characters or vocals or programs, appreciate the history of the medium, have that one UTAU or design they liked as a kid before they learned about vsynth, or at the very least have friends who like other programs.
Those are people who, if they wish a vocalist had gone to SynthV instead, will word it as "SynthV next? 👀" or "aw, I'm disappointed, I wish they had gone to SynthV". Largely.
The people saying "this is shit I don't understand why they didn't go to SynthV" are people who literally are not aware of the popularity and history of other products (let alone aware of relationships between certain brands), therefore people who can be assumed not to be in the fandom. Or at the very least to be new to it.
Besides those being rude because they're rude people (those people always exist, and they're on every side of the argument), there's a good chance they literally think SynthV is the best and don't understand why a company would go for a worse product. Like they don't have information that could explain it.
#and this speaks to the quality of YAMAHA's output but also the marketing of every brand other than SynthV#as well as SynthV's own#(more western for example)
6 notes
·
View notes
Text
got requested to share my xueyi build so!!! here it is :]
(+ EU server crew - shes on my supports rn if u wanna add me to try her out alongside e1 RM and e0s1 blade)
anyway some notes:
a bit of xueyi 101 to start out jic - the atk orb and break rope are the standard for her; her break effect to dmg% conversion A2 trace makes quantum dmg a joke whereas her sources of atk are very limited. build-wise as this freaky puppetcore weirdgirl crit/break hybrid she wants it all rly; both crit, BE and atk, but prioritizing crit until some sort of 1:2 (guides tend to put 60/120 as the baseline) is key.
S5 aeon is perfectly fine and its what i used to have on her, i pulled indelible promise in order to end the custody battle over aeon between her and DHIL lmao (+ new gacha 4* LCs in hsr are like new 4* charas luckily - indelible promise after its initial release patch in 2.0 is now a permanent offrate on all banners except for beginner, phew). the crit on it is especially nice, rly hope i get spooked with some superimpositions in the future.
she shouldnt be on glamoth anymore really - this showcase is with her on spd boots (as you can see. duh) and she does reach the 135 spd for the 1st glamoth buff requirement when with RM (which is every time i play her) so its fine, but as i swap to atk boots when with sparkle im sure you can see why it would be... suboptimal. problem is. well. look at her rope. still havent managed to roll a comparable salsotto one so 💀 we live with this.
her relics are still a bit scuffed (chest and both her atk and spd boots are. fine. but could be better) but since i often have fx and sparkle patching up her crit its fine for now.
also yes i shouldve unlocked those last few quantum dmg and BE traces ages ago but its not rly that impactful when her dmg% is already as high as it is with the amount of BE i have on her so oh well x)
obviously given her dependence on break xueyis far from a universal dps - i only use her in heavily quantum weak fights (or in SU where u can get blessings to spam her ult for the weakness ignoring attack) but when she gets to shine she shreds So hard i love her sm 🥰🥰
+ heres the atk boots build ig. not that much changing as you can see but ¯\_(ツ)_/¯
#also i should Really mention slash warn how like. xueyi is Easily the most difficult dps to pilot to date lmao like its not even close#being the weird freaky crit/break hybrid she is she uses both break and crit as the sources of her dmg and NEEDS both to work out#xueyi p much uses the weakness bar of the ENEMY as a limited 'resource' to do her dmg the same way someone like DHIL gobbles up all ur SP#& gameplay needs to be navigated very carefully to get the most out of her bc she will NOT be giving u a nice time otherwise JKWJDDJKDWJKWD#xueyi requiring actual thinking is sth i personally love since it makes her feel so much more dynamic and interactive than most carries#but its def sth where ur mileage will vary 💀💀 like.for example. allies stealing even ONE big break from her will massively nerf her output#anyway i wouldnt call her completely eidolon dependent but i also will say that having her e6 p much the entire time ive had her built#will mean ive had a much smoother experience overall w her; e2 e4 and e6 are all meaningful and significant boosts so do keep it in mind#ive seen some insane e0 to e2 showcases as well so its clear that a well piloted xueyi doesnt Need the reduced max karma stacks from e6#but the QoL and ease of use from having 6 karma to work towards as opposed to 8 is undeniably massive#aaanyway thats all im p sure but . feel free to ask more abt her im not the local xueyi tryhard enthusiast here for no reason x)#hsr#gaming tag
4 notes
·
View notes
Text
Learning how much of web design is just going "sure, why not" as I slap some things in a category and make it a flex box and it's working now? It's displaying what I need it to? Adjust the image size? Hmmmmm let's just edit the attributes. Sure why not.
#speculation nation#hfkshfkshfks got the lab done and it's. well i did all the things it told me to do. not 100% sure i did the Right things entirely#but there are like. so many different ways to accomplish similar or even the same sorts of things???#so idk whether the image size adjustment was supposed to happen automatically with a column adjustment thing#that i used the wrong thing to adjust the columns. or like. if that was just an unspoken requirement??#bc the example showed the images changing for screen sizes changing. so i went and manually implemented that.#but im also not entirely sure i did the screen sizes right either lmao. it had it like#well i set up the article vs the aside as two columns via making the main theyre both under into a grid. 2fr 1fr.#so that the article is 2/3rds the space and the aside is 1/3rd the space#but then in the next section it says to establish that like. definitively? so i went and just put. 66vw and 33vw for them#as the like. 66 percent of the viewport and 33 percent. which i found is NOT the same as just plain 66% and 33%#bc it didnt look how i needed to with the % but it DID with the vw so. ???? who knows what that actually did. but whatever works i guess#and thus. the spirit of the original post lmfao. 'sure why not.'#oh well im learning. even if i dont entirely know what the fuck im doing all of the time.#so long as my output looks like what the example does and i accomplish all the requirements then oh wellllll
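For anyone landing here from the tag: a minimal sketch of the layout described above, assuming a `main` element wrapping an `article` and an `aside`. Selectors and values are illustrative guesses, not the actual lab code.

```css
/* A two-column grid where the article gets 2/3 of the width and the aside
   gets 1/3, roughly as the tags describe. Selectors are assumptions, not the
   actual lab code. */
main {
  display: grid;
  grid-template-columns: 2fr 1fr; /* article : aside = 2 : 1 */
  gap: 1rem;
}

/* Images shrink with their column instead of overflowing it. */
img {
  max-width: 100%;
  height: auto;
}
```

(On the vw vs % question: percentages in grid-template-columns are measured against the grid container, while vw is measured against the whole viewport, which is why 66vw/33vw and 66%/33% can look different whenever the container isn't exactly viewport-width.)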
0 notes
Text
A company making wooden wind turbine blades has successfully tested a 50-meter-long prototype that’s set to debut soon in the Indian and European markets.
Last year, the German firm Voodin successfully demonstrated that their laminated-veneer timber blades could be fabricated, adapted, and installed at a lower cost than existing blades, while maintaining performance.
Now, Voodin has announced a partnership with the Indian wind company Senvion to supply its 4.2-megawatt turbines with these wooden blades for another trial run.
Wind power has accumulated more than a few demerit points for shortfalls across the industry built around this fossil-fuel alternative.
Some of these, such as the impact on bird life, are justified, but none more so than the fact that the turbine blades are impossible or nearly impossible to recycle, and that they need to be changed every 25 years.
Wind turbine blades are made from a mixture of glass and carbon fiber heated together with sticky epoxy resin, and these materials can’t be separated once combined, which means they go into landfills or are incinerated when they become too battered to safely operate.
GNN has reported that folks will occasionally find second-life value in these giant blades, for example in Denmark, where they are turned into bike shelters. In another instance, they're being used as pedestrian bridges.
But there are way more wind turbine blades being made every year than pedestrian bridges and bike shelters, making the overall environmental impact of wind power not all green.
“At the end of their lifecycle, most blades are buried in the ground or incinerated. This means that—at this pace—we will end up with 50 million tonnes of blade material waste by 2050,” Voodin Blade Technology’s CEO, Mr. Siekmann, said recently. “With our solution, we want to help green energy truly become as green as possible.”
The last 15 years have seen rapid growth in another industry called mass timber. This state-of-the-art manufacturing technique sees panels of lumber heat-pressed, cross-laminated, and glued into a finished product that’s being used to make skyscrapers, airports, and more.
At the end of the day though, mass timber products are still wood, and can be recycled in a variety of ways.
“The blades are not only an innovative technological advancement but a significant leap toward sustainable wind production,” said Siekmann, adding that this isn’t a case of pay more to waste less; the blades cost around 20% less than carbon fiber.
Additionally, the added flexibility of wood should allow for taller towers and longer blades, potentially boosting a turbine's output by giving it access to higher wind speeds.
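A rough back-of-envelope shows why that matters, using the standard wind-power relation. The numbers below are illustrative assumptions, not Voodin or Senvion specifications.

```javascript
// Standard wind-power relation: P = 0.5 * rho * A * v^3 * Cp.
// All numbers below are illustrative assumptions, not Voodin/Senvion specs.
const rho = 1.225;                     // air density, kg/m^3
const Cp = 0.45;                       // typical power coefficient (Betz limit is ~0.59)
const radius = 50;                     // rotor radius in metres, roughly a 50 m blade
const sweptArea = Math.PI * radius ** 2;

const powerMW = (windSpeed) => (0.5 * rho * sweptArea * windSpeed ** 3 * Cp) / 1e6;

console.log(powerMW(10).toFixed(2));   // ~2.16 MW at 10 m/s
console.log(powerMW(11).toFixed(2));   // ~2.88 MW at 11 m/s: about 33% more power
                                       // for 10% more wind, because P scales with v^3
```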
Now partnered with Voodin, Senvion will begin feasibility analysis in the next few months, before official testing begins around 2027.
#good news#wind turbines#wind power#environmentalism#science#environment#fossil fuel alternatives#mass timber#recycling
11K notes
·
View notes
Text

Okay, let's think this through using the basic science of electromagnetism.

This is the EM spectrum. This spectrum is basically a depiction of how energetic the waves are. High energy is on the left. Low energy is on the right.
Mobile phones receive and transmit microwaves. Microwaves are very low energy. Which means it takes a lot of effort for them to be harmful.
A real life example would be a microwave oven. It takes a thousand watts before microwaves are able to cook our food. And it still takes a while before that energy can cook the center of a Hot Pocket.
A mobile phone's max output is 3 watts, but if it is close to a tower, it could operate as low as 0.001 watts.
UV light is much higher energy. The sun outputs a lot of UV light. You can lay out in the sun for a long time and it will give you a hell of a sunburn, but it still does not have enough energy to penetrate past a few layers of skin.
Think about that.
The power of the sun. Blasting you with UV for hours. 1000 watts per square meter. And it doesn't even come close to reaching your brain.
Even if your brain was exposed and you blasted it directly with 3 watts of microwaves for hours and hours, it would not have the energy required to do any significant damage.
This is stupid. We can literally test and measure this.
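One more rough back-of-envelope in the same spirit (my numbers, assuming a typical ~2.45 GHz phone-band microwave and ~300 nm UV, not figures from the post): compare the energy carried by a single photon of each.

```javascript
// Compare the energy of one microwave photon with one UV photon using E = h * f.
// Frequencies are assumed typical values, chosen for illustration.
const h = 6.626e-34;   // Planck constant, J*s
const c = 3e8;         // speed of light, m/s
const eV = 1.602e-19;  // joules per electronvolt

const microwavePhoton = (h * 2.45e9) / eV;     // ~2.45 GHz band
const uvPhoton = (h * (c / 300e-9)) / eV;      // ~300 nm UV

console.log(microwavePhoton.toExponential(2)); // ~1.01e-5 eV
console.log(uvPhoton.toFixed(1));              // ~4.1 eV
// Chemical bonds take a few eV to break, so a microwave photon is roughly
// 400,000x too weak to do the kind of damage UV can. No matter how many watts
// the handset puts out, the individual photons stay non-ionizing.
```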
2K notes
·
View notes
Note
With DND 5e being set up to cause DM burnout, can you give examples of tabletop systems that facilitate easy DMing? I love running a tabletop game but don't have the time to deal with 5e or homebrew anymore.
(With reference to this post here.)
This is an area where you're going to get a lot of bad advice, because there's no such thing as a tabletop RPG that's "easy to GM" in the abstract. Some systems make greater or lesser demands of the GM's time and skill, but the reason that Dungeons & Dragons has a massive GM burnout problem is a bit more subtle than that – indeed, D&D's GM burnout problem is considerably worse than that of many games whose procedures of play place much greater demands on the GM!
It boils down to the fact that games are opinionated. Even a very simple set of rules contains a vast number of baked-in assumptions about how the game ought to be played; in the case of tabletop RPGs, those baked-in assumptions include assumptions about what kinds of stories the game ought to be used to tell. The players of any given group, of course, also have assumptions – some explicit, many unexamined – about how the game's story ought to go. It's rare that these two sets of assumptions will perfectly agree.
Fortunately, perfect agreement isn't necessary, because tabletop RPGs aren't computer games, and it's always possible to tweak the outputs of the rules on the fly to better suit the desired narrative experience. In conventional one-GM-many-players games like D&D, this responsibility for monitoring and adjusting the outputs of the rules so that they're compatible with the narrative space the group wishes to explore falls principally on the GM.
Now, here's where the trouble starts: the larger the disconnect between the story the rules want to produce and the narrative space the group wants to explore, the more work the GM in a conventional one-GM-many-players context needs to do in order to close that gap. If the disconnect is large enough, the GM ends up spending practically all of their time babysitting the outputs of the rules, at the expense of literally every other facet of their responsibilities.
(Conversely, if that gap is large and isn't successfully closed, you can end up with a situation where engaging with the rules and engaging with the narrative become mutually exclusive activities. This is where we get daft ideas like "combat" and "roleplaying" being opposites – which is nonsense, of course, but it's persuasive nonsense if you've never experienced a game where the rules agree with you about what kind of story you should be telling.)
And here's where the problem with Dungeons & Dragons in particular arises. The rules of D&D aren't especially more opinionated than those of your average tabletop RPG; however, the game has developed a culture of play that's allergic to actually acknowledging this. There are several legs to this, including:
a text which makes claims about the game's supported modes of play that are far broader than what the rules in fact support;
a body of received wisdom about GMing best practices which consists mostly of advice on how to close the gap between the rules' assumptions and the players' expectations (but refuses to admit that this is what it's doing);
a player culture which has become increasingly hostile to players learning or knowing the rules, and positions any expectation that players should learn the rules as a form of "gatekeeping"; and
a propensity to treat a very high level of GMing skill as an entry-level expectation.
Taken together, all this produces a situation where, when the rules and the group disagree about how the game's story ought to go, the players don't experience it as a problem with the rules: they experience it as a problem with the GM. A lot of GMs even buy into this perception themselves, which is how you end up with GM advice forums overflowing with people telling novice GMs that they're morally bad people for being unprepared to tackle very advanced GMing challenges right from the jump.
(At this point, one may wonder: why on Earth would a game develop this sort of culture of play in the first place? Who benefits? Well, what we're looking at in practice is a culture of play which treats novice and casual GMs as a disposable resource whose purpose is to maximise the number of people playing Dungeons & Dragons. Follow the money!)
So, after all of that, the short answer is that there isn't a specific magic-bullet solution to avoiding D&D's GM burnout problem – or, at least, not one that operates at the level of the rules, because there's no particular thing that D&D as a system is doing "wrong" that produces this outcome; the problem operates almost entirely at the play culture level.
In practice, two things need to happen:
Placing a greater expectation on the players to learn and understand the game's rules; and
Selecting a system where the gap between the story the rules want to produce and the narrative space the group wants to explore is small.
It's that second one that's the real trick. In order to minimise that gap, we need to know what kind of narrative space your group wants to explore, and that might not be something you have a good answer to if you don't have good lines of communication with your players.
(As an aside, there's a good chance that we're going to see dipsticks cropping up in the notes insisting that their favourite system short-circuits this problem by being perfectly universal and having no baked-in narrative assumptions. These people are lying to you, and lending credence to the idea that there's any such thing as a universal RPG is a big part of how we got into this mess in the first place!)
2K notes
·
View notes
Text
re: last rb, I think the takeaway is not “wow this person is so wrong, Harry Du Bois isn’t a generic white man he’s actually interesting so he’s not an example of a generic white guy character” but that perhaps the writers of the game were making an intentional decision about him being a middle-aged white police officer when they wrote the story. like the limitation of dismissing his character as “just another white guy protag” is that it treats ‘representation’ as essentially a doll dress-up game where identity is just a series of discrete inert properties that you plaster onto an already-complete narrative for the purposes of census demographic reflection - as if Harry is a white police officer only because white creators view themselves as default protagonists of all stories and his character is a simple mindless output of that - and not an active component of narrative decision-making. especially disco elysium of all games lol. there’s a fascist named measurehead in it, it’s not exactly subtle about its desire to engage with white supremacy, and I think the game is making a statement about that by forcing you to play as a white cop. and like you can object to those narrative choices and/or the quality of their execution, but Harry could be as ‘generic’ as possible and I don’t think that would make that “just another white guy” critique any more substantive
2K notes
·
View notes
Text
Our Stance On Gen-AI
This year, for the first time, we've had a couple of reports from bidders that the FTH fanworks they received were produced using generative AI. For that reason, we've decided that it's important that we lay out a specific, concrete policy going forward.
Generative AI tools are not welcome here.
Non-exhaustive list of examples:
image generators like Imagen, Midjourney, and similar
video generators like Sora, Runway, and similar
LLMs like ChatGPT and similar
audio generators like ElevenLabs, MusicLM, and similar
Participants found to have used generative AI to produce a fanwork, in part or in whole, for their bidder(s) will be permanently banned from participating in future iterations of Fandom Trumps Hate.
Why?
We understand that there can be contentious debate around the use of generative AI, we know individual people have their own reasons for being in favor of it, and we recognize that many people may simply be unaware that these tools come with any negative impacts at all. Regardless, we are firm in our stance on this for the following (non-exhaustive) list of key reasons in no particular order:
negative, unregulated environmental impact
Over the years, you may have noticed that we’ve supported multiple environmental organizations doing important work to combat climate change, preserve wildlife, and advocate for renewable and sustainable energy policy changes. Generative AI tools produce a startling amount of e-waste, can require massive amounts of storage space and computational power, and are a (currently unregulated) drain on natural resources. Using these tools to produce a fanwork flies in the face of every environmental organization we have supported to date.
plagiarism and lack of artistic integrity
Most if not all generative AI models are trained on some amount of stolen work (across various mediums). As a result, any output generated by these models is at worst plagiarized and at best extremely derivative and unoriginal. In our opinion, using generative AI tools to produce a fanwork demonstrates a lack of care for your own craft, a lack of respect for the work of other creators, and a lack of respect for your bidder and your commitment to them.
undermining our community building impact
One of the best things to come out of the auction every year—we can't even call it a side benefit, because it's so central to us—is that bidders and creators form collaborative relationships which sometimes even turn into friendship. Using generative AI undermines that trust and collaboration.
undermining the value of participating as a creator
Bidders participate in Fandom Trumps Hate for the opportunity to prompt YOU to create a fanwork for them, in YOUR style with YOUR specific skill set. Any potential bidder is perfectly capable of dropping a prompt into a generative AI tool on their own time, if they wish. We hope all creators sign up with the aim to play a role more significant than “unnecessary middleman.”
In general, we try to be as flexible as we can in our policies to allow for the best experience possible for all Fandom Trumps Hate participants. This, however, is something we are not willing to be flexible on. We realize this may seem unusually rigid, but we ask that you trust we have given this serious consideration and respect that while we are willing to answer clarifying questions, we are not open to debate on this topic.
1K notes
·
View notes
Text
The global food economy is massively inefficient. The need for standardized products means tons of edible food are destroyed or left to rot. This is one reason more than one-third of the global food supply is wasted or lost; for the U.S., the figure is closer to one-half. The logic of global trade results in massive quantities of identical products being simultaneously imported and exported—a needless waste of fossil fuels and an enormous addition to greenhouse gas emissions. In a typical year, for example, the U.S. imports more than 400,000 tons of potatoes and 1 million tons of beef while exporting almost the same tonnage. The same is true of many other food commodities and countries. The same logic leads to shipping foods worldwide simply to reduce labor costs for processing. Shrimp harvested off the coast of Scotland, for example, are shipped 6,000 miles to Thailand to be peeled, then shipped 6,000 miles back to the UK to be sold to consumers. The supposed efficiency of monocultural production is based on output per unit of labor, which is maximized by replacing jobs with chemical- and energy-intensive technology. Measured by output per acre, however—a far more relevant metric—smaller-scale farms are typically 8 to 20 times more productive.
5 November 2024
1K notes
·
View notes
Video
youtube
File Upload Download Microservice in Nodejs Javascript | API for Multipa... Full Video Link: https://youtu.be/Kyi6sYj9Img
Hello friends, a new #video on #nodejs #javascript #microservices for #filedownload and #fileupload #multer #multipart #formdata #multipartformdata #javascript #project #application #tutorial #examples is published on the #codeonedigest #youtube channel. @java #java #aws #awscloud @awscloud @AWSCloudIndia #salesforce #Cloud #CloudComputing @YouTube #youtube #azure #msazure #codeonedigest @codeonedigest #nodejs #nodejs #javascript #microservices #nodejstutorial #learnnodejs #node.js #nodejsfileupload #nodejsmulter #nodejsmulterfileupload #nodejsmulterimageupload #nodejsmicroservicesfileupload #nodejsmicroservicesfiledownload #nodejsapifileupload #nodejsapifiledownload #nodejsfileuploadapi #nodejsfileuploadusingmulter #nodejsfiledownload #nodejsfiledownloadapi #nodejsdownloadfilefromserver #nodejsmultipartfileupload #multerinnodejs
#youtube#multer#nodejs microservice#nodejs microservice mongodb#nodejs microservice architecture#nodejs microservice example#nodejs microservices mongodb#nodejs api#file upload#file download#file input output#javascript api
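For anyone curious what such a service roughly looks like, here is a minimal sketch assuming Express plus multer for the multipart/form-data handling. The route names and the "file" field name are illustrative choices, not taken from the video.

```javascript
// Minimal file upload/download service sketch using Express + multer.
const express = require("express");
const multer = require("multer");
const path = require("path");

const app = express();
const upload = multer({ dest: "uploads/" }); // multer writes incoming files to disk

// POST /files -- expects multipart/form-data with a single "file" field.
app.post("/files", upload.single("file"), (req, res) => {
  res.status(201).json({
    id: req.file.filename,             // multer's generated name on disk
    originalName: req.file.originalname,
    size: req.file.size,
  });
});

// GET /files/:id -- streams the stored file back to the client.
app.get("/files/:id", (req, res) => {
  res.download(path.join("uploads", req.params.id));
});

app.listen(3000, () => console.log("file service listening on :3000"));
```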
1 note
·
View note
Note
hey what’s up, i think you’re pretty cool but disagree with you on the whole ai can make art thing. to me, without the purpose from an actual person creating the piece, it’s not art but an image; as all human art has purpose. some driving factor in a work, compared to a program which purely creates the prompt without further intention. i was wondering what your insight on this is? either way, hope you have a great day
well, first of all, does art require 'purpose'? there's this view of art which has very much calcified in "anti-AI" rhetoric, that art is some linear process of communication from one individual to another: an Artist puts some Meaning into a unit of Art, which others can then view to Receive that Meaning. you can hold this view, but i don't! i'm much more of a stuart hall-head on this, i think that there is no such transfusion of Intent and that rather the 'meaning' of a piece is something that exists only in the interplay between text and reader. reading is an active, interpretative process of decoding, not a passive absorptive one. so i dispute, firstly, that 'purpose' is to begin with a necessary or even important element of art.
moreover i think this argument rests on a very arbitrarily selective view of what counts as "an actual person creating the piece" -- 'the prompt' is, itself, an obvious artistic contribution, a place where an artist can impart huge amounts of direction, vision, and so on. in fact, i completely reject the claim of both the technology's salesmen and its biggest detractors that genAI "makes art" -- to quote kerry mitchell's fractal art manifesto: "Turn a computer on and leave it alone for an hour. When you come back, no art will have been generated." in the past, i've posed questions about generative art pieces to demonstrate this
secondly, of course, the process does not end after image generation from prompt for serious generative artists--the ones who are serious about the artform (rather than tech guys trying to do marketing for the Magical Art Box) frequently iterate and iterate, generating a range of iterations and then picking one to iterate on further, so on and so forth, until the final image they choose to share is one that contains within it the traces of a thousand discrete choices on behalf of the artist (two pretty good explanations of this from people who actually do this stuff can be found here and here)
third and finally, that very choice to share the image is itself an artistic decision! we (and by we, i mean, anyone who cares about what art is) have been talking about this since fountain -- display is a form of artistic intent, taking something and putting it forward and saying 'this is art' is in and of itself an artistic decision being made even if the thing itself is unaltered: see, for example, the entire discipline of 'found art'. once someone challenged me, yknow, "if you did a google search, would that be art?" and my answer to that is, if you screenshot that google search and share it as art, then yes, resoundingly yes! curation and presentation recontextualizes objects, turning them into rich texts through the simple process of reframing them. so even if you granted that genAI output is inherently random computer noise (i don't, of course) -- i still think that the act of presenting it as art makes it so.
since i assume you're not familiar with anything interesting in the medium, because the most popular stuff made with genAI is pure "lo-fi girl in ghibli style" type slop, let me share some genAI pieces (or genAI-influenced pieces) that i think are powerful and interesting:
the meat gala, rob sheridan (warning: body horror!)
secret horses (does anyone know the original source on this?)
infinite art machine, reachartwork
ethinically ambigaus, james tamagotchi
mcdonalds simpsons porn room, wayneradiotv
software greatman, everything everything (the music is completely made by the band, but genAI was partially responsible for the lyrics -- including the title and the several interesting pseudo-kennings)
i want a love like this music video, everything everything
cocaine is the motor of the modern world, bots of new york
poison the walker, roborosewatermasters (here's my analysis posts on it too)
not all of these were necessarily intended as art: but i think they are rich and fascinating texts when read that way -- they have certainly impacted me as much as any art has.
anyways, whether you agree or not, i hope this gives you some stuff to think about, thanks for sharing your thoughts :)
587 notes
·
View notes
Text
The conversation around AI is going to get away from us quickly because people lack the language to distinguish types of AI--and it's not their fault. Companies love to slap "AI" on anything they believe can pass for something "intelligent" a computer program is doing. And this muddies the waters when people want to talk about AI when the exact same word covers a wide umbrella and they themselves don't know how to qualify the distinctions within.
I'm a software engineer and not a data scientist, so I'm not exactly at the level of domain expert. But I work with data scientists, and I have at least rudimentary college-level knowledge of machine learning and linear algebra from my CS degree. So I want to give some quick guidance.
What is AI? And what is not AI?
So what's the difference between just a computer program, and an "AI" program? Computers can do a lot of smart things, and companies love the idea of calling anything that seems smart enough "AI", but industry-wise the question of "how smart" a program is has nothing to do with whether it is AI.
A regular, non-AI computer program is procedural, and rigidly defined. I could "program" traffic light behavior that essentially goes { if(light === green) { go(); } else { stop();} }. I've told it in simple and rigid terms what condition to check, and how to behave based on that check. (A better program would have a lot more to check for, like signs and road conditions and pedestrians in the street, and those things will still need to be spelled out.)
An AI traffic light behavior is generated by machine-learning, which simplistically is a huge cranking machine of linear algebra which you feed training data into and it "learns" from. By "learning" I mean it's developing a complex and opaque model of parameters to fit the training data (but not over-fit). In this case the training data probably includes thousands of videos of car behavior at traffic intersections. Through parameter tweaking and model adjustment, data scientists will turn this crank over and over adjusting it to create something which, in very opaque terms, has developed a model that will guess the right behavioral output for any future scenario.
A well-trained model would be fed a green light and know to go, and a red light and know to stop, and 'green but there's a kid in the road' and know to stop. A very very well-trained model can probably do this better than my program above, because it has the capacity to be more adaptive than my rigidly-defined thing if the rigidly-defined program is missing some considerations. But if the AI model makes a wrong choice, it is significantly harder to trace down why exactly it did that.
Because again, the reason it's making this decision may be very opaque. It's like engineering a very specific plinko machine which gets tweaked to be very good at taking a road input and giving the right output. But like if that plinko machine contained millions of pegs and none of them necessarily correlated to anything to do with the road. There's possibly no "if green, go, else stop" to look for. (Maybe there is, for traffic light specifically as that is intentionally very simplistic. But a model trained to recognize written numbers for example likely contains no parameters at all that you could map to ideas a human has like "look for a rigid line in the number". The parameters may be all, to humans, meaningless.)
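To make the "crank" concrete, here's a minimal sketch (my own toy example, not anything a real traffic system uses): a two-parameter logistic regression trained by gradient descent on a tiny made-up traffic dataset. The finished "model" is just a few numbers; nothing in it reads like an explicit rule.

```javascript
// Toy machine learning: logistic regression trained by gradient descent on a
// made-up traffic dataset. Features are [lightIsGreen, kidInRoad]; label 1 = go,
// 0 = stop. Real models have millions of parameters; the point here is that the
// program never contains an explicit "if green, go" rule -- it only ends up with
// weights that happen to encode one.
const data = [
  { x: [1, 0], y: 1 }, // green, no kid  -> go
  { x: [1, 1], y: 0 }, // green, kid     -> stop
  { x: [0, 0], y: 0 }, // red, no kid    -> stop
  { x: [0, 1], y: 0 }, // red, kid       -> stop
];

const w = [0, 0];
let b = 0;
const sigmoid = (z) => 1 / (1 + Math.exp(-z));
const predict = (x) => sigmoid(w[0] * x[0] + w[1] * x[1] + b);

// "Turning the crank": repeatedly nudge the parameters to reduce the error.
const learningRate = 0.5;
for (let epoch = 0; epoch < 5000; epoch++) {
  for (const { x, y } of data) {
    const error = predict(x) - y; // how wrong we are on this example
    w[0] -= learningRate * error * x[0];
    w[1] -= learningRate * error * x[1];
    b -= learningRate * error;
  }
}

// The trained "model" is just these numbers.
console.log({ w, b });
console.log("green, no kid:", predict([1, 0]).toFixed(3)); // close to 1 -> go
console.log("green, kid:   ", predict([1, 1]).toFixed(3)); // close to 0 -> stop
```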
So, that's basics. Here are some categories of things which get called AI:
1. "AI" which is just genuinely not AI
There's plenty of software that follows a normal, procedural program defined rigidly, with no linear algebra model training, that companies would love to brand as "AI" because it sounds cool.
Something like motion detection/tracking might be sold as artificially intelligent. But under the covers that can be done as simply as "if some range of pixels changes color by a certain amount, flag as motion"
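A minimal sketch of that non-AI approach (names and thresholds are made up for illustration):

```javascript
// Rigid, procedural "motion detection": compare two grayscale frames pixel by
// pixel and flag motion when enough pixels change by more than a threshold.
// No training involved; frames are flat arrays of 0-255 brightness values.
function detectMotion(prevFrame, currFrame, pixelThreshold = 25, minChangedPixels = 500) {
  let changed = 0;
  for (let i = 0; i < currFrame.length; i++) {
    if (Math.abs(currFrame[i] - prevFrame[i]) > pixelThreshold) {
      changed++;
    }
  }
  return changed >= minChangedPixels;
}

// Example: two 100x100 "frames" where 600 pixels brightened sharply.
const prev = new Array(10000).fill(50);
const curr = prev.map((v, i) => (i < 600 ? 200 : v));
console.log(detectMotion(prev, curr)); // true
```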
2. AI which IS genuinely AI, but is not the kind of AI everyone is talking about right now
"AI", by which I mean machine learning using linear algebra, is very good at being fed a lot of training data, and then coming up with an ability to go and categorize real information.
The AI technology that looks at cells and determines whether they're cancer or not, that is using this technology. OCR (Optical Character Recognition) is the technology that can take an image of hand-written text and transcribe it. Again, it's using linear algebra, so yes it's AI.
Many other such examples exist, and have been around for quite a good number of years. They share the genre of technology, which is machine learning models, but these are not the Large Language Model Generative AI that is all over the media. Criticizing these would be like criticizing airplanes when you're actually mad at military drones. It's the same "makes fly in the air" technology but their impact is very different.
3. The AI we ARE talking about. "Chat-gpt" type of Generative AI which uses LLMs ("Large Language Models")
If there was one word I wish people would know in all this, it's LLM (Large Language Model). This describes the KIND of machine learning model that Chat-GPT/midjourney/stablediffusion are fueled by. They're so extremely powerfully trained on human language that they can take an input of conversational language and create a predictive output that is coherent to humans. (I am less certain what additional technology fuels art creation specifically, but considering that AI art generation has risen hand-in-hand with the advent of powerful LLMs, I'm at least confident in saying LLMs are still at the core of it.)
This technology isn't exactly brand new (predictive text has been using it, but more like the mostly innocent and much less successful older sibling of some celebrity, who no one really thinks about.) But the scale and power of LLM-based AI technology is what is new with Chat-GPT.
This is the generative AI in question, and more specifically, the large language model kind of generative AI.
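To make "predictive output" concrete, here is a toy sketch of next-word prediction. This is a simple bigram count model of my own, nothing remotely like a transformer or the scale of a real LLM, but it shows the basic framing: given what came before, pick a statistically likely continuation.

```javascript
// Toy next-word predictor: count which word follows which in a tiny corpus,
// then predict by picking the most frequent follower. Corpus is made up.
const corpus = "the cat sat on the mat the cat ate the fish".split(" ");

const nextCounts = {};
for (let i = 0; i < corpus.length - 1; i++) {
  const [cur, next] = [corpus[i], corpus[i + 1]];
  nextCounts[cur] = nextCounts[cur] || {};
  nextCounts[cur][next] = (nextCounts[cur][next] || 0) + 1;
}

function predictNext(word) {
  const followers = nextCounts[word];
  if (!followers) return null;
  // Pick the follower with the highest count.
  return Object.entries(followers).sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predictNext("the")); // "cat" (seen twice, vs "mat"/"fish" once each)
```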
(Data scientists, feel free to add on or correct anything.)
3K notes
·
View notes