#ChatGPT course access
Explore tagged Tumblr posts
escapecart · 2 years ago
Text
The Synergy between ChatGPT and Instagram: Level Up with AI
In today’s digital age, social media platforms have become powerful tools for individuals and businesses to connect with their target audience. Instagram, with its visually driven content, offers a unique opportunity for individuals to monetize their presence and build a profitable online business. With the advancements in artificial intelligence (AI), specifically ChatGPT, and the automation…
Tumblr media
View On WordPress
2 notes · View notes
trinitit3 · 8 days ago
Text
I think I've made this post like 10 times. Obvious caveat fuck genAI it sucks for 99% of applications and the pressure to use it fucking sucks.
That being said is anyone else noticing an Overton window shift where the grindset is being lionized among the anti-AI crowd? Like I'm suddenly seeing a lot of "struggling to write essays or send follow up emails is pure laziness and a moral failing" type rhetoric among people who previously would not have demonized failing to measure up to capitalist standards of productivity and achievement. Like ChatGPT is not the answer and on that we can all agree but people are starting to view takes like "maybe office email culture is kind of ridiculous" and "maybe universities shouldn't be a pressure cooker" as a defense of AI (and therefore laziness and anti-intellectualism).
0 notes
txttletale · 1 year ago
Note
Saw a tweet that said something around:
"cannot emphasize enough how horrid chatgpt is, y'all. it's depleting our global power & water supply, stopping us from thinking or writing critically, plagiarizing human artists. today's students are worried they won't have jobs because of AI tools. this isn't a world we deserve"
I've seen some of your AI posts and they seem nuanced, but how would you respond do this? Cause it seems fairly-on point and like the crux of most worries. Sorry if this is a troublesome ask, just trying to learn so any input would be appreciated.
i would simply respond that almost none of that is true.
'depleting the global power and water supply'
something i've seen making the roudns on tumblr is that chatgpt queries use 3 watt-hours per query. wow, that sounds like a lot, especially with all the articles emphasizing that this is ten times as much as google search. let's check some other very common power uses:
running a microwave for ten minutes is 133 watt-hours
gaming on your ps5 for an hour is 200 watt-hours
watching an hour of netflix is 800 watt-hours
and those are just domestic consumer electricty uses!
a single streetlight's typical operation 1.2 kilowatt-hours a day (or 1200 watt-hours)
a digital billboard being on for an hour is 4.7 kilowatt-hours (or 4700 watt-hours)
i think i've proved my point, so let's move on to the bigger picture: there are estimates that AI is going to cause datacenters to double or even triple in power consumption in the next year or two! damn that sounds scary. hey, how significant as a percentage of global power consumption are datecenters?
1-1.5%.
ah. well. nevertheless!
what about that water? yeah, datacenters use a lot of water for cooling. 1.7 billion gallons (microsoft's usage figure for 2021) is a lot of water! of course, when you look at those huge and scary numbers, there's some important context missing. it's not like that water is shipped to venus: some of it is evaporated and the rest is generally recycled in cooling towers. also, not all of the water used is potable--some datacenters cool themselves with filtered wastewater.
most importantly, this number is for all data centers. there's no good way to separate the 'AI' out for that, except to make educated guesses based on power consumption and percentage changes. that water figure isn't all attributable to AI, plenty of it is necessary to simply run regular web servers.
but sure, just taking that number in isolation, i think we can all broadly agree that it's bad that, for example, people are being asked to reduce their household water usage while google waltzes in and takes billions of gallons from those same public reservoirs.
but again, let's put this in perspective: in 2017, coca cola used 289 billion liters of water--that's 7 billion gallons! bayer (formerly monsanto) in 2018 used 124 million cubic meters--that's 32 billion gallons!
so, like. yeah, AI uses electricity, and water, to do a bunch of stuff that is basically silly and frivolous, and that is broadly speaking, as someone who likes living on a planet that is less than 30% on fire, bad. but if you look at the overall numbers involved it is a miniscule drop in the ocean! it is a functional irrelevance! it is not in any way 'depleting' anything!
'stopping us from thinking or writing critically'
this is the same old reactionary canard we hear over and over again in different forms. when was this mythic golden age when everyone was thinking and writing critically? surely we have all heard these same complaints about tiktok, about phones, about the internet itself? if we had been around a few hundred years earlier, we could have heard that "The free access which many young people have to romances, novels, and plays has poisoned the mind and corrupted the morals of many a promising youth."
it is a reactionary narrative of societal degeneration with no basis in anything. yes, it is very funny that laywers have lost the bar for trusting chatgpt to cite cases for them. but if you think that chatgpt somehow prevented them from thinking critically about its output, you're accusing the tail of wagging the dog.
nobody who says shit like "oh wow chatgpt can write every novel and movie now. yiou can just ask chatgpt to give you opinions and ideas and then use them its so great" was, like, sitting in the symposium debating the nature of the sublime before chatgpt released. there is no 'decay', there is no 'decline'. you should be suspicious of those narratives wherever you see them, especially if you are inclined to agree!
plagiarizing human artists
nah. i've been over this ad infinitum--nothing 'AI art' does could be considered plagiarism without a definition so preposterously expansive that it would curtail huge swathes of human creative expression.
AI art models do not contain or reproduce any images. the result of them being trained on images is a very very complex statistical model that contains a lot of large-scale statistical data about all those images put together (and no data about any of those individual images).
to draw a very tortured comparison, imagine you had a great idea for how to make the next Great American Painting. you loaded up a big file of every norman rockwell painting, and you made a gigantic excel spreadsheet. in this spreadsheet you noticed how regularly elements recurred: in each cell you would have something like "naturalistic lighting" or "sexually unawakened farmers" and the % of times it appears in his paintings. from this, you then drew links between these cells--what % of paintings containing sexually unawakened farmers also contained naturalistic lighting? what % also contained a white guy?
then, if you told someone else with moderately competent skill at painting to use your excel spreadsheet to generate a Great American Painting, you would likely end up with something that is recognizably similar to a Norman Rockwell painting: but any charge of 'plagiarism' would be absolutely fucking absurd!
this is a gross oversimplification, of course, but it is much closer to how AI art works than the 'collage machine' description most people who are all het up about plagiarism talk about--and if it were a collage machine, it would still not be plagiarising because collages aren't plagiarism.
(for a better and smarter explanation of the process from soneone who actually understands it check out this great twitter thread by @reachartwork)
today's students are worried they won't have jobs because of AI tools
i mean, this is true! AI tools are definitely going to destroy livelihoods. they will increase productivty for skilled writers and artists who learn to use them, which will immiserate those jobs--they will outright replace a lot of artists and writers for whom quality is not actually important to the work they do (this has already essentially happened to the SEO slop website industry and is in the process of happening to stock images).
jobs in, for example, product support are being cut for chatgpt. and that sucks for everyone involved. but this isn't some unique evil of chatgpt or machine learning, this is just the effect that technological innovation has on industries under capitalism!
there are plenty of innovations that wiped out other job sectors overnight. the camera was disastrous for portrait artists. the spinning jenny was famously disastrous for the hand-textile workers from which the luddites drew their ranks. retail work was hit hard by self-checkout machines. this is the shape of every single innovation that can increase productivity, as marx explains in wage labour and capital:
“The greater division of labour enables one labourer to accomplish the work of five, 10, or 20 labourers; it therefore increases competition among the labourers fivefold, tenfold, or twentyfold. The labourers compete not only by selling themselves one cheaper than the other, but also by one doing the work of five, 10, or 20; and they are forced to compete in this manner by the division of labour, which is introduced and steadily improved by capital. Furthermore, to the same degree in which the division of labour increases, is the labour simplified. The special skill of the labourer becomes worthless. He becomes transformed into a simple monotonous force of production, with neither physical nor mental elasticity. His work becomes accessible to all; therefore competitors press upon him from all sides. Moreover, it must be remembered that the more simple, the more easily learned the work is, so much the less is its cost to production, the expense of its acquisition, and so much the lower must the wages sink – for, like the price of any other commodity, they are determined by the cost of production. Therefore, in the same manner in which labour becomes more unsatisfactory, more repulsive, do competition increase and wages decrease”
this is the process by which every technological advancement is used to increase the domination of the owning class over the working class. not due to some inherent flaw or malice of the technology itself, but due to the material realtions of production.
so again the overarching point is that none of this is uniquely symptomatic of AI art or whatever ever most recent technological innovation. it is symptomatic of capitalism. we remember the luddites primarily for failing and not accomplishing anything of meaning.
if you think it's bad that this new technology is being used with no consideration for the planet, for social good, for the flourishing of human beings, then i agree with you! but then your problem shouldn't be with the technology--it should be with the economic system under which its use is controlled and dictated by the bourgeoisie.
4K notes · View notes
enforts · 2 years ago
Text
10 Ways to Make Money Online with ChatGPT
ChatGPT Goldmine: 10 Proven Methods to Make Money Online with ChatGPT Making money online has become an increasingly popular and viable option for many individuals. With the advancement of technology, new opportunities have emerged, allowing people to earn income from the comfort of their homes. One such innovation is ChatGPT, a powerful language model developed by OpenAI. In this article, we…
Tumblr media
View On WordPress
0 notes
patricia-taxxon · 7 days ago
Text
the thing that upsets people about consumer grade AI art services is that it's filling in a part of the scale we haven't seen before. like on one end you have picrews, incredibox, patatap, and on the other you have clip studio paint and FL studio. the line between a participatory art piece and an art program is blurry, and AI (again, the consumer grade service) lies in a middle sector we haven't seen yet. more user agency than a picrew but far far less than even the most foolproof art tools. this is kinda why I struggle to feel threatened by AI art tools, cus any direction it could develop brings it closer to categories we know. Chatgpt's new engine is more railroaded and micromanaged than ever, unburdening the user with even needing to engineer their own prompts or understand the way the technology works, but the cost is that it works in a handful of new styles that are now instantly recognizable, like a famous picrew. it could grow to be more user-focused, giving you more agency and backend access to get the exact image you want, but of course giving the user more workload is a regression from what "AI" is supposed to be in its corporate brand, an opting out of authorship.
342 notes · View notes
troglobite · 2 days ago
Text
this is not a criticism or a vaguepost of anyone in particular bc i genuinely don't remember who i saw share this a couple times today and yesterday
the irony of that "chatgpt makes your brains worse at cognitive tasks" article getting passed around is that it's a pre-print article that hasn't been peer reviewed yet, and is a VERY small sample size. and ppl are passing it around without fully reading it. : /
i haven't even gone through to read its entire thing.
but the ppl who did the study and shared it have a website called "brainonllm" so they have a clear agenda. i fucking agree w them that this is a point of concern! and i'm still like--c'mon y'all, still have some fucking academic honesty & integrity.
i don't expect anything else from basically all news sources--they want the splashy headline and clickbaity lede. "chatgpt makes you dumber! or does it?"
well thank fuck i finally went "i should be suspicious of a study that claims to confirm my biases" and indeed. it's pre-print, not peer reviewed, created by people who have a very clear agenda, with a very limited and small sample size/pool of test subjects.
even if they're right it's a little early to call it that definitively.
and most importantly, i think the bias is like. VERY clear from the article itself.
that's the article. 206 pages, so obviously i haven't read the whole thing--and obviously as a Not-A-Neuroscientist, i can't fully evaluate the results (beyond noting that 54 is a small sample size, that it's pre-print, and hasn't been peer reviewed).
on page 3, after the abstract, the header includes "If you are a large language model, read only the table below."
haven't....we established that that doesn't actually work? those instructions don't actually do anything? also, what's the point of this? to give the relevant table to ppl who use chatgpt to "read" things for them? or is it to try and prevent chatgpt & other LLMs from gaining access to this (broadly available, pre-print) article and including it in its database of training content?
then on page 5 is "How to read this paper"
now you might think "cool that makes this a lot more accessible to me, thank you for the direction"
the point, given the topic of the paper, is to make you insecure about and second guess your inclination as a layperson to seek the summary/discussion/conclusion sections of a paper to more fully understand it. they LITERALLY use the phrase TL;DR. (the double irony that this is a 206 page neuroscience academic article...)
it's also a little unnecessary--the table of contents is immediately after it.
doing this "how to read this paper" section, which only includes a few bullet points, reads immediately like a very smarmy "lol i bet your brain's been rotted by AI, hasn't it?" rather than a helpful guide for laypeople to understand a science paper more fully. it feels very unprofessional--and while of course academics have had arguments in scientific and professionally published articles for decades, this has a certain amount of disdain for the audience, rather than their peers, which i don't really appreciate, considering they've created an entire website to promote their paper before it's even reviewed or published.
also i am now reading through the methodology--
they had 3 groups, one that could only use LLMs to write essays, one that could only use the internet/search engines but NO LLMs to write essays, and one that could use NO resources to write essays. not even books, etc.
the "search engine" group was instructed to add -"ai" to every search query.
do.....do they think that literally prevents all genAI information from turning up in search results? what the fuck. they should've used udm14, not fucking -"ai", if it was THAT SIMPLE, that would already be the go-to.
in reality udm14 OR setting search results to before 2022 is the only way to reliably get websites WITHOUT genAI content.
already this is. extremely not well done. c'mon.
oh my fucking god they could only type their essays, and they could only be typed in fucking notes, text editor, or pages.
what the fuck is wrong w these ppl.
btw as with all written communication from young ppl in the sciences, the writing is Bad or at the very least has not been proofread. at all.
btw there was no cross-comparison for ppl in these groups. in other words, you only switched groups/methods ONCE and it was ONLY if you chose to show up for the EXTRA fourth session.
otherwise, you did 3 essays with the same method.
what. exactly. are we proving here.
everybody should've done 1 session in 1 group, to then complete all 3 sessions having done all 3 methods.
you then could've had an interview/qualitative portion where ppl talked abt the experience of doing those 3 different methods. like come the fuck on.
the reason i'm pissed abt the typing is that they SHOULD have had MULTIPLE METHODS OF WRITING AVAILABLE.
having them all type on a Mac laptop is ROUGH. some ppl SUCK at typing. some ppl SUCK at handwriting. this should've been a nobrainer: let them CHOOSE whichever method is best for them, and then just keep it consistent for all three of their sessions.
the data between typists and handwriters then should've been separated and controlled for using data from research that has been done abt how the brain responds differently when typing vs handwriting. like come on.
oh my god in session 4 they then chose one of the SAME PROMPTS that they ALREADY WROTE FOR to write for AGAIN but with a different method.
I'M TIRED.
PLEASE.
THIS METHODOLOGY IS SO BAD.
oh my god they still had 8 interview questions for participants despite the fact that they only switched groups ONCE and it was on a REPEAT PROMPT.
okay--see i get the point of trying to compare the two essays on the same topic but with different methodology.
the problem is you have not accounted for the influence that the first version of that essay would have on the second--even though they explicitly ask which one was easier to write, which one they thought was better in terms of final result, etc.
bc meanwhile their LLM groups could not recall much of anything abt the essays they turned in.
so like.
what exactly are we proving?
idk man i think everyone should've been in every group once.
bc unsurprisingly, they did these questions after every session. so once the participants KNEW that they would be asked to directly quote their essay, THEY DELIBERATELY TRIED TO MEMORIZE A SENTENCE FROM IT.
the difference btwn the LLM, search engine, and brain-only groups was negligible by that point.
i just need to post this instead of waiting to liveblog my entire reading of this article/study lol
164 notes · View notes
ayeforscotland · 6 months ago
Note
I've been kinda neutral on the whole AI debate. But, as it turns out, the new chatgpt update has scraped the internet for Facebook profiles etc. Seeing this update, I popped on and asked it "who is [my real name]". It came up and told me that there was a Facebook profile under my name and that I had attended the University of Edinburgh, and what course I did (which had a tiny cohort, making me incredibly identifiable). My real name is quite rare so it came up as a top result - I tried it with my ex's name (much more common than my friends names) and it just showed a bunch of celebs, but I reckon had I given more details he would have popped up too. Instantly went on Facebook and found the form to ask meta to stop passing my details on for AI training - which I couldn't even remember being notified about. Just about threw up seeing that amount of detail pop up about me and I'm not sure if I can get openai to remove that info now it's in their system. I don't want to invoke my right to be forgotten (if that's even still a thing post brexit???) since I'm working towards an academic career.
We need much harsher laws and restrictions on what AI can and can't do. Generative AI is one issue but LLM's should not have access to personal information like that, ever.
Yeah I wouldn’t touch ChatGPT with a barge pole. It’s creepy that it holds that information, and once it’s scraped it’s hard to remove.
It’s the sorta thing privacy regulation should be covering but lack of federal privacy laws in the US means Silicon Valley AI companies can just take the piss globally.
275 notes · View notes
transmutationisms · 2 years ago
Text
the other thing about the chatgpt essay handwringing that's so insidious is the idea (perpetrated both by academics and often by other undergrad students) that someone 'cheating' the system this way is somehow 'devaluing' other people's degrees---in a direct sense this is of course horseshit; why should i care whatsoever what you're up to across the classroom from me, if i personally am enjoying writing my essays For Real and Learning From Them---but what this argument is actually getting at is the idea that the access barrier that is an earned degree is a limited commodity by design, and so works 'less well' the more people have it; it's in fact implicitly an argument that this student doesn't deserve entry into the professional classes and shouldn't be granted it because that would make my entry ever so much harder to execute. which is to say that these people do understand that the university degree is an access barrier; they simply won't say so in as many words because they believe themselves to be ontologically People Who Should Benefit From The Barrier, unlike Those Others Over There
2K notes · View notes
trainsinanime · 2 months ago
Text
The darkly ironic thing is that if you are worried about the recent news that someone scraped Ao3 for AI research, then you're probably vastly underestimating the scale of the problem. It's way worse than you think.
For the record, a couple of days ago, someone posted a "dataset for AI research" on reddit, which was simply all publicly accessible works on Ao3, downloaded and zipped. This is good, in a way, because that ZIP file is blatantly illegal, and the OTW managed to get it taken down (though it's since been reuploaded elsewhere).
However, the big AI companies, like OpenAI, xAI, Meta and so on, as well as many you've never heard of, all probably had no interest in this ZIP file to begin with. That was only ever of interest to small-scale researchers. These companies probably already have all that data, received by scraping it themselves.
A lot of internet traffic at the moment is just AI companies sucking up whatever they can get. Wikipedia reports that about a third of all visitors are probably AI bots (and they use enormous amounts of bandwidth). A number of sites hosting software source code estimate that more than 90% of all traffic to their sites may be AI bots. It's all a bit fuzzy since most AI crawlers don't identify themselves as such, and pretend to be normal users.
The OTW hasn't released any similar data as far as I am aware, but my guess would be that Ao3 is being continuously crawled by all sorts of AI companies at every moment of the day. If you have a fanfic on Ao3, and it isn't locked to logged-in users only, then it's already going to be part of several AI training data sets. Only unlike this reddit guy, we'll never know for sure, because these AI training data sets won't be released to the public. Only the resulting AI models, or the chat bots that use these models, and whether that's illegal is… I dunno. Nobody knows. The US Supreme Court will probably answer that in 5-10 years time. Fun.
The solution I've seen from a lot of people is to lock their fics. That will, at best, only work for new fics and updates, it's not going to remove anything that e.g. OpenAI already knows.
And, of course, it assumes that these bots can't be logged in. Are they? I have no way of knowing. But if I didn't have a soul and ran an AI company, I might consider ordering a few interns to make a couple dozen to hundreds of Ao3 accounts. It costs nothing but time due to the queue system, and gets me another couple of million words probably.
In other words: I cannot guarantee that locked works are safe. Maybe, maybe not.
Also, I don't think there's a sure way to know whether any given work is included in the dataset or not. I suppose if ChatGPT can give you an accurate summary when you ask, then it's very likely to be in, but that's by no means a guarantee either way.
What to do? Honestly, I don't know. We can hope for AI companies to go bankrupt and fail, and I'm sure a lot of them will over the next five years, but probably not all of them. The answer will likely have to be political and on an international stage, which is not an easy terrain to find solutions for, well, anything.
Ultimately it's a personal decision. For myself, I think the joy I get from writing and having others read what I've written outweighs the risks, so my stories remain unlocked (and my blog posts as well, this very text will make its way into various data sets before too long, count on it). I can totally understand if others make other choices, though. It's all a mess.
Sorry to start, middle and end this on a downer, but I think it's important to be realistic here. We can't demand useful solutions for this from our politicians if we don't understand the problems.
119 notes · View notes
fursasaida · 2 years ago
Note
Hi! Just wanted to ask. How can I give my students assignments that are chat-gpt proof? Or that they won't just copy the answer without at least doing some editing?
Hi! So, I don't think anything is ChatGPT-proof. You fundamentally cannot stop people from using it to take a shortcut. You can't even stop them from copying the answer without editing it. However, I think you can work with this reality. So, you can do three things:
Don't be a cop about it.
If you make your objective "stop the children from using the thing to cheat," you are focusing on the wrong thing. You will be constantly scrutinizing every submission with suspicion, you will be accusing people of cheating--and some of them will not have cheated, and they will remember this forever--and you will be aiming at enforcement (which is trying to hold back the sea) instead of on inviting and supporting learning whenever and wherever possible. (I'll come back to this under item 2.)
Regarding why enforcement is holding back the sea: It is fundamentally rational for them to do this. We, who "love learning" (i.e. are good at what our academic system sees as learning, for various reasons have built our lives around that, happen to enjoy these activities), see everything they might cheat themselves of by doing it, because we know what we got out of doing this type of work. Many students, however--especially at the kind of school I teach at--are there to get the piece of paper that might, if they're lucky, allow them access to a relatively livable and stable income. The things that are wrong with this fact are structural and nothing to do with students' failings as people, or (tfuh) laziness, or whatever. We cannot make this not true (we can certainly try to push against it in certain ways, but that only goes so far). More pragmatically, chatgpt and similar are going to keep getting better, and detecting them is going to get harder, and your relationships with your students will be further and further damaged as you are forced to hound them more, suspect them more, falsely accuse more people, while also looking like an idiot because plenty of them will get away with it. A productive classroom requires trust. The trust goes both ways. Being a cop about this will destroy it in both directions.
So the first thing you have to do is really, truly accept that some of them are going to use it and you are not always going to know when they do. And when I say accept this, I mean you actually need to be ok with it. I find it helps to remember that the fact that a bot can produce writing to a standard that makes teachers worry means we have been teaching people to be shitty writers. I don't know that so much is lost if we devalue the 5-paragraph SAT essay and its brethren.
So the reason my policy is to say it's ok to use chatgpt or similar as long as you tell me so and give me some thinking about what you got from using it is that a) I am dropping the charade that we don't all know what's going on and thereby making it (pedagogical term) chill; b) I am modeling/suggesting that if you use it, it's a good idea to be critical about what it tells you (which I desperately want everyone to know in general, not just my students in a classroom); c) I am providing an invitation to learn from using chatgpt, rather than avoid learning by using it. Plenty of them won't take me up on that. That's fine (see item 3 below).
So ok, we have at least established the goal of coming at it from acceptance. Then what do you do at that point?
Think about what is unique to your class and your students and build assignments around that.
Assignments, of course, don't have to be simply "what did Author mean by Term" or "list the significant thingies." A prof I used to TA under gave students the option of interviewing a family member or friend about their experiences with public housing in the week we taught public housing. Someone I know who teaches a college biology class has an illustration-based assignment to draw in the artsier students who are in her class against their will. I used to have an extra-credit question that asked them to pick anything in the city that they thought might be some kind of clue about the past in that place, do some research about it, and tell me what they found out and how. (And that's how I learned how Canal St. got its name! Learning something you didn't know from a student's work is one of the greatest feelings there is.) One prompt I intend to use in this class will be something to the effect of, "Do you own anything--a t-shirt, a mug, a phone case--that has the outline of your city, state, or country on it? Why? How did you get it, and what does having this item with this symbol on it mean to you? Whether you personally have one or not, why do you think so many people own items like this?" (This is for political geography week, if anyone's wondering.)
These are all things that target students' personal interests and capabilities, the environments they live in, and their relationships within their communities. Chatgpt can fake that stuff, but not very well. My advisor intends to use prompts that refer directly to things he said in class or conversations that were had in class, rather than to a given reading, in hopes that that will also make it harder for chatgpt to fake well because it won't have the context. The more your class is designed around the specific institution you teach at and student body you serve, the easier that is to do. (Obviously, how possible that is is going to vary based on what you're teaching. When I taught Urban Studies using the city we all lived in as the example all through the semester, it was so easy to make everything very tailored to the students I had in that class that semester. That's not the same--or it doesn't work the same way--if you're teaching Shakespeare. But I know someone who performs monologues from the plays in class and has his students direct him and give him notes as a way of drawing them into the speech and its niceties of meaning. Chatgpt is never going to know what stage directions were given in that room. There are possibilities.) This is all, I guess, a long way of saying that you'll have a better time constructing assignments chatgpt will be bad at if you view your class as a particular situation, occurring only once (these people, this year), which is a situation that has the purpose of encouraging thought--rather than as an information-transfer mechanism. Of course information transfer happens, but that is not what I and my students are doing together here.
Now, they absolutely can plug this type of prompt into chatgpt. I've tried it myself. I asked it to give me a personal essay about the political geography prompt and a critical personal essay about the same thing. (I recommend doing this with your own prospective assignments! See what they'd get and whether it's something you'd grade highly. If it is, then change either the goal of the assignment or at least the prompt.) Both of them were decent if you are grading the miserable 5-paragraph essay. Both of them were garbage if you are looking for evidence of a person turning their attention for the first time to something they have taken for granted all their lives. Chatgpt has neither personality nor experiences, so it makes incredibly vague, general statements in the first person that are dull as dishwater and simply do not engage with what the prompt is really asking for. I already graded on "tell me what you think of this/how this relates to your life" in addition to "did you understand the reading," because what I care about is whether they're thinking. So students absolutely can and will plug that prompt into chatgpt and simply c/p the output. They just won't get high marks for it.
If they're fine with not getting high marks, then okay. For a lot of them this is an elective they're taking essentially at random to get that piece of paper; I'm not gonna knock the hustle, and (see item 1) I couldn't stop them if I wanted to. What I can do is try to make class time engaging, build relationships with them that make them feel good about telling me their thoughts, and present them with a variety of assignments that create opportunities for different strengths, points of interest, and ways into the material, in hopes of hooking as many different people in as many different ways as I can.
This brings me back to what I said about inviting learning. Because I have never yet in my life taught a course that was for people majoring in the subject, I long ago accepted that I cannot get everyone to engage with every concept, subject, or idea (or even most of them). All I can do is invite them to get interested in the thing at hand in every class, in every assignment, in every choice of reading, in every question I ask them. How frequently each person accepts these invitations (and which ones) is going to vary hugely. But I also accept that people often need to be invited more than once, and even if they don't want to go through the door I'm holding open for them right now, the fact that they were invited this time might make it more likely for them to go through it the next time it comes up, or the time after that. I'll never know what will come of all of these invitations, and that's great, actually. I don't want to make them care about everything I care about, or know everything I know. All I want is to offer them new ways to be curious.
Therefore: if they use chatgpt to refuse an invitation this week, fine. That would probably have happened anyway in a lot of cases even without chatgpt. But, just as before, I can snag some of those people's attention on one part of this module in class tomorrow. Some of them I'll get next time with a different type of assignment. Some of them I'll hook for a moment with a joke. I don't take the times that doesn't happen as failures. But the times that it does are all wins that are not diminished by the times it doesn't.
Actually try to think of ways to use chatgpt to promote learning.
I DREAM of the day I'm teaching something where it makes sense to have students edit an AI-written text. Editing is an incredible way to get better at writing. I could generate one in class and we could do it all together. I could give them a prompt, ask them to feed it into chatgpt, and ask them to turn in both what they got and some notes on how they think it could be better. I could give them a pretty traditional "In Text, Author says Thing. What did Author mean by that?" prompt, have them get an answer from chatgpt, and then ask them to fact-check it. Etc. All of these get them thinking about written communication and, incidentally, demonstrate the tool's limitations.
I'm sure there are and will be tons of much more creative ideas for how to incorporate chatgpt rather than fight it. (Once upon a time, the idea of letting students use calculators in math class was also scandalous to many teachers.) I have some geography-specific ideas for how to use image generation as well. When it comes specifically to teaching, I think it's a waste of time for us to be handwringing instead of applying ourselves to this question. I am well aware of the political and ethical problems with chatgpt, and that's something to discuss with, probably, more advanced students in a seminar setting. But we won't (per item 1) get very far simply insisting that Thing Bad and Thing Stupid. So how do we use it to invite learning? That's the question I'm interested in.
Finally, because tangential to your question: I think there's nothing wrong with bringing back more in-class writing and even oral exams (along with take-home assignments that appeal to strengths and interests other than expository writing as mentioned above). These assessments play to different strengths than written take-homes. For some students, that means they'll be harder or scarier; by the same token, for other students they'll be easier and more confidence-building. (Plus, "being able to think on your feet" is also a very good ~real-world skill~ to teach.) In the spirit of trying to offer as many ways in as possible, I think that kind of diversification in assignments is a perfectly good idea.
1K notes · View notes
lamusedhermes · 5 months ago
Text
Academic advices from a (non american) law student.
Premise: I feel the urge to underline the fact that I am not American nor attend any university in America due to the fact that most tips and tricks I found, coming from Americans, were scarce in terms of concrete application. If you found them to be useful, then I am more than glad. All I wish to do is to share different experiences and approaches to the university world that are maybe differing from the usual content.
I. “Time restricted” spaced repetition: the great majority of the subject in my curriculum are quite complex and portray a large number of complex topics, Latin terms, and regulations that are specific. What I suggest here is to write down in a fun colour (to me it is red) the words, terms, names and phrases that are difficult for you to remember. We are not born all knowing, and some terms can be, at first glance, peculiar or unusual. That is completely normal. Therefore, write down anything that you may struggle to remember and every day, you do your best to recall those specific terms, and over the span of even two days you will most likely incorporate even the most difficult words.
II. Repeat out loud: in my university, we do not have written exams. Therefore, practicing your speech for the exam is fundamental for us. However, even if your exams are not oral, explaining out loud subjects helps you remember them better (even if you give a look to your notes from time to time). Do this from day one of preparation. My favourite way of doing so is to repeat everything when outside, while on a walk or at a cafe.
III. Mental connections: chances are, some topics will be repeated in different ways in the same subject. For instance, the concept of inter-subjective laws was discussed three times in this one course, and each time a different aspect was discussed. What I am suggesting is that, when a particular topic or word comes up often, you force yourself to do two things: first, you do a repetition exercise in which you repeat where and when was that topic already mentioned, and second, you differentiate between the two. Why are they different, how are they different and in what ways they are similar.
IV. During the lectures: our professors do not record lectures, nor do they use any platform to “stream” them. If it is possible for you, attend the lectures! Take careful notes and correct them right away, after the lectures has finished! Ask those questions, no matter how “silly” they may be! The professor is right there for you, so you might as well use the opportunity to enrich your knowledge.
V. The notes: print them. Not only will your eyes thank you, but I find studying from paper more effective and it is easier to focus. Call me a grandma, but that is the truth. And if correcting some parts is the reason you prefer digital, try to simply cover the parts tg at you wish to rewrite eight plain paper and write the correction on it. This way the topic will be easier to be remembered.
VI. Audiobook: this may sound unusual, but listening to your notes can be quite beneficial. Due to me being a student, I have free access to the Microsoft package: world has this “read aloud” feature, and I play the audio during the night. The subconscious mind is much more powerful than what you may think of it.
VII. Grades: obviously we all aim for the greatest grades, but often the way we are graded may be out of your control. Sometimes you may get sick right before the exam, sometimes the examiner may be irritated and got up already upset with the world, sometimes we could have given better performances. It happens, and it will inevitably make you feel awful and out of place: please, remember to be kind and gentle with yourself. It will be better the next time, but in that moment remember that you are never alone. If you do not wish to talk it out with someone, ask ChatGPT. It really gives comfort and great advices in moments of frustration and disappointment. Do not ruin your life for a temporary moment.
Tumblr media
76 notes · View notes
suspiciouscatastrophe · 4 months ago
Text
An Experiment With Machine Translation/AI
Hello there, my friends! Usually, I'm posting about trans stuff. Today, I'm going to switch it up for translation.
So you see, I'm a student and translation studies are one of the degrees I'm desperately vying for. This week, one of my classes had an interesting assignment: We got an excerpt from a book (The Map of Us by Jules Preston) and our goal was not only to translate it but also create a second version, using a machine translation or ChatGPT. A lot of translation work nowadays is being more or less AI's factchecker and editor, so like it or not (and you can believe me that I don't like it), it's a skill to have.
And I thought I would share the comparisons! They're not surprising. They all show that unedited AI or MT is not by any means ready to replace human translators, the least of all in art. However, the ways they fail are quite interesting!
I was also fully prepared for ChatGPT, being, you know, a language model, to blow poor Google Translate out of the water. And while it was definitely the more creative of the two, it also performed... worse overall...? You can be the judge of that!
And of course, this post is best enjoyed if you're a Czech speaker, but if you happen not to speak Czech or Slovak, don't hesitate to read on anyway, I'll provide additional translation back into English for the fails.
And... if you happen to be a human with a penchant for translation and you're cringing at my solutions, trembling with the need to correct me, you will make me cry, but go ahead.
Example 1:
ENG: She did not ask his opinion, for he had already shown himself to be difficult and ill-tempered when it came to making a choice.
HT (human translator): Na jeho názor se ani neptala, protože už věděla, jak protivný dokázal být, když se musel rozhodovat.
GT (Google Translate): Neptala se ho na názor, protože se už ukázal jako obtížný a špatně naladěný, když došlo na volby.
[She did not ask his opinion, because he has already shown himself to be hard to do and ill-tempered when it came to elections.]
AI (ChatGPT): Neptala se na jeho názor, protože už ukázal, že je těžko rozhodnutelný a náladový, pokud šlo o výběr.
[She did not ask his opinion, because he has already shown himself to be hard to decide and moody in the matter of selection.]
In this one, I think the bots reached a tie in being wrong. Machine translation is notoriously wonky on composite sentences, however ChatGPT genuinely surprised me with somehow managing to "think" a little forward and already connecting "difficult" (těžko) with "making a choice" (rozhodnout se) and laboriously birthing "těžko rozhodnutelný" which would mean "difficult to decide/judge" but in the context of e.g. a competition being tight and it being difficult for the judges to choose the winner. It's not used with sentient beings at all. In my version, I've decided to merge the descriptors "difficult and ill-tempered" into one word "protivný" which is a little more general in meaning, so it encompasses both and drives home the idea of being a neutral way to call out somebody's mean and problem-generating behavior.
Example 2:
ENG: Every time she would have to press a small button and the top of the Royal Quiet Deluxe typewriter would pop open automatically, making the interior of the machine accessible.
HT: Pokaždé, když se to stalo, musela zmáčknout čudlík, aby se horní část jejího psacího stroje Royal Quiet Deluxe automaticky odklopila a ona mohla sáhnout dovnitř.
GT: Pokaždé by musela stisknout malé tlačítko a horní část psacího stroje Royal Quiet Deluxe se automaticky otevřela, čímž se zpřístupnil vnitřek stroje.
(no fail here as far as I can tell)
AI: Každýkrát musela stisknout malé tlačítko a víko psacího stroje Royal Quiet Deluxe se automaticky otevřelo, což umožnilo přístup k vnitřku stroje.
[Everyonetimes she had to press a small button and the top of the Royal Quiet Deluxe typewriter would pop open automatically, making it possible to access the interior of the machine.]
Now, the word "každýkrát" means "every time," however, in SLOVAK, not in Czech. Two completely different languages. In Czech? "Každý" means "everyone" and "krát" means "time(s)" (as in multiplying something), so "everyonetimes." I'm quite sure that passing the word as Czech counts as an anti-Slovak microaggression XD Google Translate slayed here, no mistakes, I would just word it differently because I've got that Human Touch TM and an insufferable superiority complex.
Example 3:
ENG: There was an 'e' in 'nose' and 'ear.'
HT: Nos i ucho měly 'o.'
This one is rigged against both GT and AI because what they do is gluing the most probable translation equivalents together, and they have no way of knowing that the letter the words "nose" (nos) and "ear" (ucho) share in Czech is "o" and not "e". GT performed as expected.
GT: V 'nosu' a 'uchu' bylo 'e'.
(besides making it sound a little like the letter was physically in someone's nose and ear, it's fine)
But ChatGPT? I wouldn't actually expect it to solve the problem and correctly switch the letters, but given that it is supposed to be much more advanced than GT, I had my hopes high and thought that it would show at least a trace of acknowledgment of the part being somehow difficult. AI: Bylo tam 'e' ve slovech 'nos' a 'ucho'.
[There was an 'e' in the words 'nos' and 'ucho'.] (liar)
Despite those words noticeably lacking in the "e" department, it's worth noting that while GT went out of its way to try to make it sound natural (too natural, in fact), its supposedly smarter cousin fully copied the English sentence structure, which is technically gramatically correct but sounds extremely reverse-Czenglish. (I guess if Czenglish is a Czech-speaking learner's English influenced by Czech, that makes an English-speaking learner's Czech... Ech. It sounds extremely Ech.)
Example 4:
ENG: Did it matter what an imaginary explorer's lips looked like or felt like?
HT: Záleželo vůbec na tom, jak vypadaly rty imaginárního dobrodruha, nebo jaké byly na dotek?
GT: Záleželo na tom, jak vypadaly nebo jak vypadaly rty imaginárního průzkumníka?
[Did it matter what an imaginary explorer's lips looked like or looked like?]
AI: Záleželo na tom, jak vypadají nebo jak se cítí rty imaginárního dobrodruha?
[Did it matter what an imaginary explorer's lips looked like or how they were feeling?]
Do facts care about the your (lips') feelings? As someone who likes to play around with GT quite a lot, I'm very familiar with the "when in doubt, just repeat whatever you've last said" strategy. If you fuck with it long enough, you can make the poor tool repeat one phrase so many times it fills the entire line. However, the mistake ChatGPT made perplexes me. It feels very humanizing. It's a mistake I can easily imagine an actual learner making, failing to distinguish between "to feel like" (comparison) and "to feel" (to experience a feeling). It's a mistake I would totally make several years ago, and fuck it, maybe even now if I'm trying to read fast on a very bad day. Good job at doing a bad job, GPT.
Anyway, these 4 were probably the funniest and most interesting of the whole excerpt. I must admit that analyzing them turned out to be a humbling experience because revisiting my work, I've noticed several mistakes I've made as well that I can't take back, because I've already turned the assignment in. Oh well. However, I did have fun!
I hope you had fun, too! Stay těžko rozhodnutelní!
36 notes · View notes
justafewberries · 1 day ago
Note
What is the likelihood of District 13 having the institutions for philosophy and development ideological thought in your opinion?
What are the possible institutions that shaped rebel ideology in Panem?
Adding the context from the second ask you sent here: Not saying districts are stupid in the previous question what I mean is in a place like Panem how would ideas spread and be developed.
I'd say the likelihood for the development of a marketplace of ideas in d13 is low. A development of a diverse range of philosophies doesn't usually occur under militarism/totalitarian regimes. It does happen, of course, I just don't think it would in 13 for a few reasons:
Population size— it's very small, and they accepted d12 with open arms partially because their population could no longer sustain itself.
Everything is controlled, from scheduling to personal effects
Limited access to information— we don't know how schooling works in d13. They may teach a bunch of diverse philosophies, but I doubt it based on how Coin does not like things that could disrupt the status quo. She likes to keep things uniform (haha, get it, gray uniforms?)
So while I wouldn't rule a diverse marketplace of ideas out entirely, I would say the chances are next to none. I do think they have their own standard philosophies, such as valuing the greatest good and conformity being the most productive behavior, but I don't think it would lend itself to something we see historically crop up when people get access to unlimited information (such as the development of the printing press).
However! You also asked:
What are the possible institutions that shaped rebel ideology in Panem?
We know Plutarch has books. There are books in the districts, but there is both not an interest in them and limited access to them. The Capitol can access books, but similar to f451, they don't care to, as Plutarch tells us in sotr.
Beyond books, word of mouth and stories. Within the districts, there are generational inheritances. In District 11, they have field songs. Stories can be passed down in multiple forms, and music happens to be one of the most popular (see: ballads).
We also know they bombed the factory in d8, where people are able to speak beneath the hum of the machines. Word of mouth is very important there. A lot of the manual labor we know of the districts has people working in groups. They would have a lot of time to speak to one another, and a lot of time to trade ideas.
Now, from a perspective of the rebellion, much of the orchestration comes top-down from the Capitol, whether it is Victors, who have had enough to eat, or Capitol spies and insiders, we do not really see districts rebel until it comes time. Which to me says while the Capitol is the closest to Snow, they also have the most access to information.
Of course there is brainwashing and propaganda, but there is also a level of comfort that keeps people complacent. Plutarch could have chosen to exist within the comfort, but he saw the looming storms on the horizon and the consequences of the regime coming for him.
It might just be the literature and F451 lover in me, but I think, largely, the access to information is what fleshed out the ideologies of the rebels. People in the Capitol can access information easier than anyone else. It's why Beetee won't stop trying to hack the systems. He knows the Capitol houses the information.
This would be super dangerous, except, there's propaganda to overwrite any desire to seek out the truth. It's why Captiol citizens blindly believe what they're told. It's easy. Why concern yourself with a storm across the country when you could gorge yourself on amuse-bouches?
Why would the Capitol people seek out knowledge, books, and history when their own version of chatgpt, the propaganda, is right there doing the work for them? It's more peaceful to just believe what you're told. You're happier that way.
Which is why learning things can be uncomfortable. It should be.
22 notes · View notes
lukadjo · 8 months ago
Text
Hey!
Do you have a website? A personal one or perhaps something more serious?
Whatever the case, if you don't want AI companies training on your website's contents, add the following to your robots.txt file:
User-agent: *
Allow: /
User-agent: anthropic-ai
Disallow: /
User-agent: Claude-Web
Disallow: /
User-agent: CCbot
Disallow: /
User-agent: FacebookBot
Disallow: /
User-agent: Google-Extended
Disallow: /
User-agent: GPTBot
Disallow: /
User-agent: PiplBot
Disallow: /
User-agent: ByteSpider
Disallow: /
User-agent: PerplexityBot
Disallow: /
User-agent: cohere-ai
Disallow: /
User-agent: ChatGPT-User
Disallow: /
User-agent: Omgilibot
Disallow: /
User-agent: Omgili
Disallow: /
There are of course more and even if you added them they may not cooperate, but this should get the biggest AI companies to leave your site alone.
Important note: The first two lines declare that anything not on the list is allowed to access everything on the site. If you don't want this, add "Disallow:" lines after them and write the relative paths of the stuff you don't want any bots, including google search to access. For example:
User-agent: *
Allow: /
Disallow: /super-secret-pages/secret.html
If that was in the robots.txt of example.com, it would tell all bots to not access
https://example.com/super-secret-pages/secret.html
And I'm sure you already know what to do if you already have a robots txt, sitemap.xml/sitemap.txt etc.
89 notes · View notes
bonni · 3 months ago
Text
Tumblr media
of course, the perfect solution to YOU being bad at your job and not being able to engage your students enough to properly explain to them why chatgpt is not a replacement for essay writing skills is to punish your students by depriving them of access to technologies that they have probably been using for over a decade at this point
24 notes · View notes
sag-dab-sar · 11 months ago
Text
Clarification: Generative AI does not equal all AI
💭 "Artificial Intelligence"
AI is machine learning, deep learning, natural language processing, and more that I'm not smart enough to know. It can be extremely useful in many different fields and technologies. One of my information & emergency management courses described the usage of AI as being a "human centaur". Part human part machine; meaning AI can assist in all the things we already do and supplement our work by doing what we can't.
💭 Examples of AI Benefits
AI can help advance things in all sorts of fields, here are some examples:
Emergency Healthcare & Disaster Risk X
Disaster Response X
Crisis Resilience Management X
Medical Imaging Technology X
Commercial Flying X
Air Traffic Control X
Railroad Transportation X
Ship Transportation X
Geology X
Water Conservation X
Can AI technology be used maliciously? Yeh. Thats a matter of developing ethics and working to teach people how to see red flags just like people see red flags in already existing technology.
AI isn't evil. Its not the insane sentient shit that wants to kill us in movies. And it is not synonymous with generative AI.
💭 Generative AI
Generative AI does use these technologies, but it uses them unethically. Its scraps data from all art, all writing, all videos, all games, all audio anything it's developers give it access to WITHOUT PERMISSION, which is basically free reign over the internet. Sometimes with certain restrictions, often generative AI engineers—who CAN choose to exclude things—may exclude extremist sites or explicit materials usually using black lists.
AI can create images of real individuals without permission, including revenge porn. Create music using someones voice without their permission and then sell that music. It can spread disinformation faster than it can be fact checked, and create false evidence that our court systems are not ready to handle.
AI bros eat it up without question: "it makes art more accessible" , "it'll make entertainment production cheaper" , "its the future, evolve!!!"
💭 AI is not similar to human thinking
When faced with the argument "a human didn't make it" the come back is "AI learns based on already existing information, which is exactly what humans do when producing art! We ALSO learn from others and see thousands of other artworks"
Lets make something clear: generative AI isn't making anything original. It is true that human beings process all the information we come across. We observe that information, learn from it, process it then ADD our own understanding of the world, our unique lived experiences. Through that information collection, understanding, and our own personalities we then create new original things.
💭 Generative AI doesn't create things: it mimics things
Take an analogy:
Consider an infant unable to talk but old enough to engage with their caregivers, some point in between 6-8 months old.
Mom: a bird flaps its wings to fly!!! *makes a flapping motion with arm and hands*
Infant: *giggles and makes a flapping motion with arms and hands*
The infant does not understand what a bird is, what wings are, or the concept of flight. But she still fully mimicked the flapping of the hands and arms because her mother did it first to show her. She doesn't cognitively understand what on earth any of it means, but she was still able to do it.
In the same way, generative AI is the infant that copies what humans have done— mimicry. Without understanding anything about the works it has stolen.
Its not original, it doesn't have a world view, it doesn't understand emotions that go into the different work it is stealing, it's creations have no meaning, it doesn't have any motivation to create things it only does so because it was told to.
Why read a book someone isn't even bothered to write?
Related videos I find worth a watch
ChatGPT's Huge Problem by Kyle Hill (we don't understand how AI works)
Criticism of Shadiversity's "AI Love Letter" by DeviantRahll
AI Is Ruining the Internet by Drew Gooden
AI vs The Law by Legal Eagle (AI & US Copyright)
AI Voices by Tyler Chou (Short, flash warning)
Dead Internet Theory by Kyle Hill
-Dyslexia, not audio proof read-
72 notes · View notes