# ChatGPT Deep Research
Text
A new lightweight version of ChatGPT Deep Research will provide higher usage limits for paid users, while free users are getting access to the new agentic capability for the first time.
0 notes
Text
Computer Vision
👁️🗨️ TL;DR – Computer Vision Lecture Summary (with emojis) 🔍 What It Is:Computer Vision (CV) teaches machines to “see” and understand images & videos like humans — but often faster and more consistently. Not to be confused with just image editing — it’s about interpretation. 🧠 How It Works:CV pipelines go from capturing images ➡️ cleaning them up ➡️ analyzing them with AI (mostly deep…
#ai#Artificial Intelligence#ChatGPT Deep Research#Computer Vision#Google Gemini Deep Research#image classification#vision systems
0 notes
Text
How to Use Google Gemini for SEO in 2025: Content, Local, Audits
Learn how to use Google Gemini to boost SEO in 2025, from on-page and local optimization to AI-powered audits, data analysis, and strategy tips. How to Use Google Gemini for Better SEO: A Comprehensive 2025 Guide Google’s AI evolution has delivered a new SEO co-pilot: Gemini. More than a content generator, Gemini is a deeply embedded assistant that can power on-page SEO, local strategies,…
#AI for SEO#Gemini Deep Research#Gemini Docs SEO#Gemini for Google Business Profile#Gemini for on-page SEO#Gemini local SEO#Gemini SEO data analysis#Gemini SEO reporting#Gemini Sheets SEO#Gemini technical SEO#Gemini vs ChatGPT SEO#Google Gemini SEO#Google Workspace SEO#SEO in 2025#SEO with Google Gemini
0 notes
Text
Find Apartments for Rent in Sri Lanka – Premium Listings in Colombo & Rajagiriya
Discover your ideal apartment with Elegant Real Estate, offering a curated selection of rental properties across Colombo 2, 3, 5, 7, 8, and Rajagiriya. Whether you're seeking a budget-friendly studio or a luxury penthouse, our listings feature properties with modern amenities in prime locations.
Benefit from our industry expertise, transparent dealings, and advanced marketing strategies to find a residence that suits your lifestyle and budget. Our team is dedicated to providing dependable service, ensuring a seamless rental experience.
Looking for a hassle-free apartment rental in Colombo or Rajagiriya?

For more information and to browse available listings, visit Elegant Real Estate’s official website.
#ElegantRealEstate#ApartmentsForRentSriLanka#ColomboRentals#RajagiriyaApartments#LuxuryApartmentsColombo#BudgetFriendlyRentals#SriLankaRealEstate#ColomboLiving#ApartmentHuntingSriLanka#RentalPropertiesColombo
0 notes
Text
🚀 OpenAI's Deep Research is finally here! 🎓📊 This AI tool in ChatGPT can generate 📄 structured reports in minutes ⏩. Say goodbye to hours of manual research! 🕒💻 Whether you're a student 🎓, professional 💼, or just curious 🤔 — this tool is a game-changer! ✅💯 💡 Get fast, credible, and well-cited reports in no time! 📊📑 👉 Try it now and boost your research efficiency! 🔥🔗 #OpenAI #DeepResearch #AIReports #ChatGPT #AIResearch #TechInnovation #AIInsights 💡📊
#AI benchmarks#AI reports#AI Research Agent#AI-powered research#ChatGPT#credible AI research#Deep Research#OpenAI#OpenAI tool#research automation
0 notes
Text
The Local AI Revolution: Open WebUI and the Power of NVIDIA GPUs in 2025
In an era dominated by cloud-based artificial intelligence, we are witnessing a quiet revolution: bringing AI back to personal computers. The emergence of Open WebUI, together with the ability to run large language models (LLMs) locally on NVIDIA GPUs, is transforming how users interact with artificial intelligence. This approach promises more…
#AI autonomy#AI eficient energetic#AI fără abonament#AI fără cloud#AI for automation#AI for coding#AI for developers#AI for research#AI on GPU#AI optimization on GPU#AI pe desktop#AI pe GPU#AI pentru automatizare#AI pentru cercetare#AI pentru dezvoltatori#AI pentru programare#AI privacy#AI without cloud#AI without subscription#alternative la ChatGPT#antrenare AI personalizată#autonomie AI#ChatGPT alternative#confidențialitate AI#costuri AI reduse#CUDA AI#deep learning local#desktop AI#energy-efficient AI#future of local AI
0 notes
Text
OpenAI Introduces a Deep Research Feature for ChatGPT
🧠 “Deep Research” Has Arrived for ChatGPT Pro! 📢 The AI can now carry out detailed research in 5-30 minutes and deliver results backed by sources. 🔍 OpenAI says this new feature will be transformative for complex research. What Is ChatGPT’s New Deep Research Feature? 📌 OpenAI has announced the “Deep Research” tool for ChatGPT Pro users. 📢 This feature…
#AI Araştırma#AI Yenilikleri#ChatGPT#Deep Research#Google Project Mariner#OpenAI#Teknoloji#Yapay Zeka
0 notes
Text
The World of Artificial Intelligence: Applications, Challenges, Future Trends
Artificial Intelligence (AI) is a field of computer science and engineering that deals with the creation of intelligent machines that work and behave like humans. These machines are programmed to learn from experience, adapt to new situations, and make decisions based on data and algorithms. AI has become one of the most important technological breakthroughs of the 21st century, transforming the…

#ai advancements#ai applications#ai challenges#ai ethics#ai future trends#ai impact#ai in education#ai in finance#ai in healthcare#ai in industry#ai innovations#ai research#ai technology#artificial intelligence#chatbots#chatgpt#deep learning#grow business#natural language processing#robotics and ai
0 notes
Text
Land Distribution and Development in Marin County, California
🔍 Overview Marin County balances pristine nature with urban edges — and this report breaks down exactly who owns what, how much is protected, and what’s getting built. 🌲🏙️ 📏 Total Land Area 🗺️ 333,056 acres (that’s 520.4 square miles — confirmed ✅ via U.S. Census Bureau) 🏞️ State Parks (~14,200 acres total) All major parks verified and filled in ✅: Mt. Tamalpais: 6,300 acres Angel Island:…
0 notes
Text
AI continues to be useful, annoying everyone
Okay, look - as much as I've been fairly on the side of "this is actually a pretty incredible technology that does have lots of actual practical uses if used correctly and with knowledge of its shortfalls" throughout the ongoing "AI era", I must admit - I don't use it as a tool too much myself.
I am all too aware of how small errors can slip in here and there, even in output that seems up to standard, and, perhaps more importantly, I still have a bit of that personal pride in being able to do things myself! I like the feeling that I have learned a skill, done research on how to do a thing and then deployed that knowledge to get the result I want. It's the bread and butter of working in tech, after all.
But here's the thing: once you move beyond beginner-level Python courses and well-documented Windows applications, there will often be times when you want to achieve a very particular thing, which involves working with a specialist application. This will usually be an application written for domain experts of that specialization, so it will not be user-friendly, and it will certainly not be "outsider-friendly".
So you will download the application. Maybe it's on the command line, has some light scripting involved in a language you've never used, or just has a byzantine shorthand command structure. There is a reference document - thankfully the authors are not that insane - but there are very few examples, and none doing exactly what you want. In order to do the useful thing you want to do, they expect you to understand how the application/platform/scripting language works, to the extent that you can apply it in a novel context.
Which is all well and good, and normally I would not recommend anybody use a tool at length unless they have taken the time to understand it to the degree that they know what they are doing. Except I do not wish to use the tool at length; I wish to do one, singular operation, as part of a larger project, and then never touch it again. It is unfortunately not worth my time to sink a few hours into learning a technology I will use once for twenty seconds and then never again.
So you spend time scouring the specialist forums, pulling up a few random syntax examples of their code and trying to string together the example commands in the docs. If you're lucky, and the syntax has enough in common with something you're familiar with, you should be able to bodge together something that works in 15-20 minutes.
But if you're not lucky, the next step would have been signing up to that forum, or making a post on that subreddit, creating a thread called "Hey, newbie here, needing help with..." and then waiting 24-48 hours to hear back from somebody - probably a years-deep veteran looking down on you with scorn for not having put in the effort to learn their Thing, never mind that you normally have no reason to. It's annoying, disruptive, and takes time.
Now I can ask ChatGPT, which will have ingested all those docs and all those forums, and it will tell me in 20 seconds exactly what I was doing wrong. Because friends, this is where a powerful attention model excels: you are not asking it to manage a complex system, but to collate complex sources into a simple synthesis. The LLM has already trained on this inference, can reproduce it in the blink of an eye, and then deliver it in the form of a user dialog.
When people say that AI is the future of tutoring, this is what they mean. Instead of waiting days to get a reply from a bored human expert, the machine knowledge blender already has it ready to retrieve via a natural language query, with all the follow-up Q&A to expand your own knowledge you could desire. And the great thing about applying this to code or scripting syntax is that you can immediately verify whether the output is correct by running it and seeing if it performs as expected, so a lot of the danger is reduced (not that any modern mainstream attention model is likely to make a mistake on something as simple as a single-line command, unless it's something barely documented online, that is).
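That verify-by-running workflow is easy to make concrete. A minimal sketch (the snippet and its expected behavior are invented for illustration, not from any real model output): treat the generated code as untrusted text, load it into a scratch namespace, and check it against a case whose answer you already know.

```python
# Stand-in for a model-generated answer to "dedupe a list, keeping order" (hypothetical)
candidate = "def dedupe(xs):\n    return list(dict.fromkeys(xs))\n"

namespace = {}
exec(candidate, namespace)  # load the generated function into a scratch namespace

# Verify against an input whose correct answer you already know
assert namespace["dedupe"]([3, 1, 3, 2, 1]) == [3, 1, 2]
print("generated snippet behaves as expected")
```

Twenty seconds of this beats a 48-hour forum round trip, and a failed assertion tells you immediately that the answer needs another look.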
It's incredibly useful, and it outdoes the capacity of any individual human researcher, as well as the latency of existing human experts. You can't argue we've ever had anything better, in any context, and it's something you can actively make use of today. And I will, because it's too good not to - despite my pride.
130 notes
Text
I've said this before, but the interesting thing about AI in science fiction is that a common theme was that humanity would invent "androids", as in human-like robots, but that for them to become intelligent and able to carry on conversations with us about deep topics would require amazing advances that might be impossible. Asimov is the example here, though he played with this concept a lot.
We kind of forgot that just ten years ago, inventing an AI that could talk fluently with a human was considered one of those intractable problems that would take us centuries to solve. In a few years not only did we get that, but we got AI able to generate code, write human-like speech, and imitate fictional characters. I'm surprised at how blasé some people arguing about AI are about this; this is, by all means, an amazing achievement.
Of course these aren't really intelligent; they are just complex algorithms that provide the most likely results for a request based on their training. There also isn't a centralized intelligence doing the thinking; it's all distributed. There is no real thinking here, of course.
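That "most likely result based on training" idea can be caricatured in a few lines. This toy bigram model is nothing like a real LLM (no neural network, no attention, invented corpus), but it has the same shape: count continuations in training text, then greedily emit the most probable next token.

```python
from collections import Counter, defaultdict

# Toy "next token predictor": count which word follows which in a tiny corpus
corpus = "the cat sat on the mat the cat ate".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word):
    # greedy decoding: always pick the highest-count continuation
    return counts[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" follows "the" twice, "mat" only once
```

Scale the counting up by many orders of magnitude and replace the table with learned weights, and you get the distributed, no-central-thinker statistics being described.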
Does this make it less of a powerful tool, though? We have computers that can interpret human language and output things on demand. This is, objectively, amazing. The problem is that they are made by a capitalist system and culture that is trying to use them for a pointless economic bubble. The reason why ChatGPT acts like the world's most eager customer service rep is because they coded it for that purpose; the reason why most image generators create crap is because they were made for advertising. But those are not the only possibilities for AI, even this model of non-thinking AIs.
The AI bubble will come and pop; it can't sustain itself. The shitty corporate models will never amount to much because they're basically toys. I'm excited for what comes after, when researchers, artists, and others finally get models that aren't corporate shit tailored to be customer service, but built for other purposes. I'm excited to see what happens when this research starts to create algorithms that might actually be alive in any sense, and maybe the lines won't even exist. I'm also worried, too.
#cosas mias#I hate silicon valley types who are like 'WITH AI WE WILL BE ABLE TO FIRE ALL WORKERS AND HAVE 362% ANNUAL GROWTH#but I also hate the neo luddites that say WHY ARE YOU MAKING THIS THERE IS NO USE FOR THIS#If you can't imagine what a computer that does what you ask in plain language could potentially do#maybe you're the one lacking imagination not the technobros
90 notes
Text
First test of ChatGPT's new "Deep Research" feature. While ChatGPT works, you can watch it think. When I asked it how it chooses its sources and whether I can influence that, its answer was as follows:
“My sources are chosen based on a combination of relevance, credibility, and availability. When I conduct a web search, I prioritise academic sources (books, journal articles, university websites), respected encyclopaedias (e.g., Britannica), and primary sources when possible. The search results depend on what is currently indexed and accessible online. However, the process is not entirely under my control, as I rely on external tools to retrieve and summarise information.
Influencing My Search
Yes, you can influence my search in several ways:
Specify preferred types of sources – If you want me to focus on peer-reviewed journal articles, primary sources, or books, I can refine my approach accordingly.
Request particular authors or perspectives – If you are interested in Marxist interpretations or a specific historian’s analysis, I can tailor my search.
Define the scope – You can ask for recent scholarship or historiographical debates instead of general overviews.
Providing URLs or PDFs
At the moment, I cannot directly access or analyse external PDFs or paywalled content. However, you can:
Provide text excerpts – If you paste sections from a PDF or a book, I can analyse, summarise, or compare them with existing research.
Give URLs to open-access sources – If a source is publicly available online, I can check it and integrate it into my response.”
0 notes
Text
Excerpts:
"The convenience of instant answers that LLMs provide can encourage passive consumption of information, which may lead to superficial engagement, weakened critical thinking skills, less deep understanding of the materials, and less long-term memory formation [8]. The reduced level of cognitive engagement could also contribute to a decrease in decision-making skills and in turn, foster habits of procrastination and "laziness" in both students and educators [13].
Additionally, due to the instant availability of the response to almost any question, LLMs can possibly make a learning process feel effortless, and prevent users from attempting any independent problem solving. By simplifying the process of obtaining answers, LLMs could decrease student motivation to perform independent research and generate solutions [15]. Lack of mental stimulation could lead to a decrease in cognitive development and negatively impact memory [15]. The use of LLMs can lead to fewer opportunities for direct human-to-human interaction or social learning, which plays a pivotal role in learning and memory formation [16].
Collaborative learning as well as discussions with other peers, colleagues, teachers are critical for the comprehension and retention of learning materials. With the use of LLMs for learning also come privacy and security issues, as well as plagiarism concerns [7]. Yang et al. [17] conducted a study with high school students in a programming course. The experimental group used ChatGPT to assist with learning programming, while the control group was only exposed to traditional teaching methods. The results showed that the experimental group had lower flow experience, self-efficacy, and learning performance compared to the control group.
Academic self-efficacy, a student's belief in their "ability to effectively plan, organize, and execute academic tasks", also contributes to how LLMs are used for learning [18]. Students with low self-efficacy are more inclined to rely on AI, especially when influenced by academic stress [18]. This leads students to prioritize immediate AI solutions over the development of cognitive and creative skills. Similarly, students with lower confidence in their writing skills, lower "self-efficacy for writing" (SEWS), tended to use ChatGPT more extensively, while higher-efficacy students were more selective in AI reliance [19]. We refer the reader to the meta-analysis [20] on the effect of ChatGPT on students' learning performance, learning perception, and higher-order thinking."
"Recent empirical studies reveal concerning patterns in how LLM-powered conversational search systems exacerbate selective exposure compared to conventional search methods. Participants engaged in more biased information querying with LLM-powered conversational search, and an opinionated LLM reinforcing their views exacerbated this bias [63]. This occurs because LLMs are in essence "next token predictors" that optimize for most probable outputs, and thus can potentially be more inclined to provide consonant information than traditional information system algorithms [63]. The conversational nature of LLM interactions compounds this effect, as users can engage in multi-turn conversations that progressively narrow their information exposure. In LLM systems, the synthesis of information from multiple sources may appear to provide diverse perspectives but can actually reinforce existing biases through algorithmic selection and presentation mechanisms.
The implications for educational environments are particularly significant, as echo chambers can fundamentally compromise the development of critical thinking skills that form the foundation of quality academic discourse. When students rely on search systems or language models that systematically filter information to align with their existing viewpoints, they might miss opportunities to engage with challenging perspectives that would strengthen their analytical capabilities and broaden their intellectual horizons. Furthermore, the sophisticated nature of these algorithmic biases means that a lot of users often remain unaware of the information gaps in their research, leading to overconfident conclusions based on incomplete evidence. This creates a cascade effect where poorly informed arguments become normalized in academic and other settings, ultimately degrading the standards of scholarly debate and undermining the educational mission of fostering independent, evidence-based reasoning."
"In summary, the Brain-only group's connectivity suggests a state of increased internal coordination, engaging memory and creative thinking (manifested as theta and delta coherence across cortical regions). The Engine group, while still cognitively active, showed a tendency toward more focal connectivity associated with handling external information (e.g. beta band links to visual-parietal areas) and comparatively less activation of the brain's long-range memory circuits. These findings are in line with literature: tasks requiring internal memory amplify low-frequency brain synchrony in frontoparietal networks [77], whereas outsourcing information (via internet search) can reduce the load on these networks and alter attentional dynamics. Notably, prior studies have found that practicing internet search can reduce activation in memory-related brain areas [83], which dovetails with our observation of weaker connectivity in those regions for the Search Engine group. Conversely, the richer connectivity of Brain-only group may reflect a cognitive state akin to that of high performers in creative or memory tasks, for instance, high creativity has been associated with increased fronto-occipital theta connectivity and intra-hemispheric synchronization in frontal-temporal circuits [81], patterns we see echoed in the Brain-only condition."
"This correlation between neural connectivity and behavioral quoting failure in LLM group's participants offers evidence that:
1. Early AI reliance may result in shallow encoding.
LLM group's poor recall and incorrect quoting is a possible indicator that their earlier essays were not internally integrated, likely due to outsourced cognitive processing to the LLM.
2. Withholding LLM tools during early stages might support memory formation.
Brain-only group's stronger behavioral recall, supported by more robust EEG connectivity, suggests that initial unaided effort promoted durable memory traces, enabling more effective reactivation even when LLM tools were introduced later.
3. Metacognitive engagement is higher in the Brain-to-LLM group.
The Brain-to-LLM group might have mentally compared their past unaided efforts with tool-generated suggestions (as supported by their comments during the interviews), engaging in self-reflection and elaborative rehearsal, a process linked to executive control and semantic integration, as seen in their EEG profile.
The significant gap in quoting accuracy between reassigned LLM and Brain-only groups was not merely a behavioral artifact; it is mirrored in the structure and strength of their neural connectivity. The LLM-to-Brain group's early dependence on LLM tools appeared to have impaired long-term semantic retention and contextual memory, limiting their ability to reconstruct content without assistance. In contrast, Brain-to-LLM participants could leverage tools more strategically, resulting in stronger performance and more cohesive neural signatures."
#anti ai#chat gpt#enshittification#brain rot#ai garbage#it's too bad that the people who need to read this the most already don't read for themselves anymore
42 notes
Text
On the subject of AI...
Okay so, I have been seeing more and more stuff related to AI-generated art recently so I’m gonna make my stance clear:
I am strongly against generative AI. I do not condone its usage personally, professionally, or in any other context.
More serious take under the cut, I am passionate about this subject:
So, first things first, I'll get my qualifications out of the way: BSc (Hons) Computer Science with a specialty in Artificial Intelligence systems and Data Security and Governance. I wrote my thesis, and did multiple R&D-style papers, on the subject. On the lower end I also have (I think the equivalent is an associate's?) qualifications in art and IT systems. I'm not normally the type to pull the 'well actually 🤓☝️' card, but I'm laying some groundwork here to establish that I am heavily involved in the fields this subject relates to, both academically and professionally.
So what is 'AI' in this context?
Nowadays when someone says ‘AI’, they’re most likely talking about Generative Artificial Intelligence – it’s a subtype of AI system that is used, primarily, to produce images, text, videos, and other media formats (thus, generative).
By this point, we’ve all heard of the likes of ChatGPT, Midjourney, etc – you get the idea. These are generative AI systems used to create the above mentioned content types.
Now, you might be inclined to think things such as:
‘Well, isn’t that a good thing? Creating stuff just got a whole lot easier!’
‘I struggle to draw [for xyz reason], so this is a really useful tool’
‘I’m not an artist, so it’s nice to be able to have something that makes things how I want them to look’
No, it’s not a good thing, and I’ll tell you exactly why.
-------------------------------------------------
What makes genAI so bad?
There’s a few reasons that slate AI as condemnable, and I’ll do my best to cover them here as concisely as I reasonably can. Some of these issues are, admittedly, hypothetical in nature – the fact of the matter is, this is a technology that has come to rise faster than people and legislature (law) can even keep up with.
Stealing Is Bad, M’kay?
Now you’re probably thinking, hold on, where does theft come into this? So, allow me to explain.
Generative AI systems are able to output the things that they do because first and foremost, they’re ‘trained’: fed lots and lots of data, so that when it’s queried with specific parameters, the result is media generated to specification. Most people understand this bit – I mean, a lot of us have screwed around with ChatGPT once or twice. I won't lie and say I haven't, because I have. Mainly for research purposes, but still. (The above is a massive simplification of the matter, because I ain't here to teach you at a university level)
Now, give some thought to where exactly that training data comes from.
Typically, this data is sourced from the web; droves of information are systematically scraped from just about every publicly available domain available on the internet, whether that be photographs someone took, art, music, writing…the list goes on. Now, I’ll underline the core of this issue nice and clearly so you get the point I’m making:
It’s not your work.
Nor does it belong to the people responsible for these systems; untold numbers of people have had their content - potentially personal content, copyrighted content - taken and used for data training. Think about it – one person having their stuff stolen and reused is bad, right? Now imagine you’ve got a whole bunch of someones who are having their stuff taken, likely without them even knowing about it, and well – that’s, obviously, very bad. Which sets up a great segue into the next point:
Potential Legislation Issues
For the sake of readability, I’ll try not to dive too deep into legalese here. In short – because of the inherent nature of genAI (that is, the taking-and-using of potentially private and licensed material), there may come a time where this poses a very real legal issue in terms of usage rights.
At the time of writing, legislation hasn’t caught up – there aren't any ratified laws that state how, and where, big AI systems such as ChatGPT can and cannot source training data. Many arguments could be made that the scope and nature of these systems practically divorces generated content from its source material, however many do not agree with this sentiment; in fact, there have been some instances of people seeking legal action due to perceived copyright infringement and material reuse without fair compensation.
It might not be in violation of laws on paper right now, but it certainly violates the spirit of these laws – laws that are designed to protect the works of creatives the world over.
AI Is Trash, And It’s Getting Trashier
Woah woah woah, I thought this was a factual document, not an opinion piece!
Fair. I’d be a liar if I said it wasn’t partly rooted in opinion, but here’s the fact: genAI is, objectively, getting worse. I could get really technical with the why portion, but I’m not rewriting my thesis here, so I’ll put it as simply as possible:
AI gets trained on Internet Stuff. AI is dubiously correct at best because of how it aggregates data (that is, from everywhere, even the factually-incorrect places)
People use AI to make stuff. They take this stuff at face value, and they don’t sanity check it against actual trusted sources of information (or a dictionary. Or an anatomy textbook)
People put that stuff back on the internet, be it in the form of images, written statements, "artwork", etc
Loop back to step 1
In the field of Artificial Intelligence this is sometimes called a runaway feedback loop: it’s the mother of all feedback loops that results in aggregated information getting more and more horrifically incorrect, inaccurate, and poorly put-together over time. Everything from facts to grammar, to that poor anime character’s sixth and seventh fingers – nothing gets spared, because there comes a point where these systems are being trained on their own outputs.
I somewhat affectionately refer to this as 'informational inbreeding'; it is becoming the pug of the digital landscape, bulging eyes and all.
Now I will note, runaway feedback loops are typically referencing algorithmic bias - but if I'm being honest, it's an apt descriptor for what's happening here too.
This trend will, inevitably, continue to get worse over time; AI-generated media is now so commonplace that it's unavoidable that these systems are going to keep eating their own tails until they break.
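The tail-eating is easy to simulate in miniature. A toy sketch, not a model of any real system: resampling stands in for "training on your own outputs", and each generation only ever sees samples of the previous generation. Watch how much distinct information survives.

```python
import random

random.seed(0)  # reproducible toy run

facts = list(range(100))  # generation 0: 100 distinct pieces of information
for generation in range(10):
    # each new generation is "trained" only on samples of the previous one's output
    facts = [random.choice(facts) for _ in range(100)]

survivors = len(set(facts))
print(f"distinct facts remaining after 10 generations: {survivors} of 100")
```

No generation ever invents new information, so diversity can only shrink; the same ratchet, plus actual errors being amplified, is the informational inbreeding described above.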
-------------------------------------------------
But I can’t draw/write! What am I supposed to do?
The age-old struggle – myself and many others sympathize, we really do. Maybe you struggle to come up with ideas, or to put your thoughts to paper cohesively, or drawing and writing is just something you’ve never really taken the time to develop before, but you’re really eager to make a start for yourself.
Maybe, like many of us including myself, you have disabilities that limit your mobility, dexterity, cognition, etc. Not your fault, obviously – it can make stuff difficult! It really can! And it can be really demoralizing to feel as though you're limited or being held back by something you can't help.
Here’s the thing, though:
It’s not an excuse, and it won’t make you a good artist.
The very artists you may or may not look up to got as good as they did by practicing. We all started somewhere, and being honest, that somewhere is something we’d cringe at if we had to look back at it for more than five minutes. I know I do. But in the context of a genAI-dominated internet nowadays, it's still something wonderfully human.
There are also many, many artists across history and time with all manner of disabilities, from chronic pain to paralysis, who still create. No two disabilities are the same, a fact I am well aware of, but there is ample proof that sheer human tenacity is a powerful tool in and of itself.
Or, put more bluntly and somewhat callously: you are not a unique case. You are not in some special category that justifies this particular brand of laziness, and your difficulties and struggles aren't license to take things that aren't yours.
The only way you’re going to create successfully? Is by actually creating things yourself. ‘Asking ChatGPT’ to spit out a writing piece for you is not writing, and you are not a writer for doing so. Using Midjourney or whatever to generate you a picture does not make you an artist. You are only doing yourself a disservice by relying on these tools.
I'll probably add more to this in time, thoughts are hard and I'm tired.
25 notes
Text






Day 6: 29th March
This coming week is going to be especially challenging for me, I know… I've a bad feeling… just thinking about gritting my teeth through it and surviving! I deep cleaned my house yesterday and cooked a large batch of food… I need to call my grandma tho… also I want to rant about the absolute shit state of YA novels these days! WTF is up with all the AI covers, and the actual story reads like it was written by ChatGPT… absolute garbage! Last night I opened a YA romance fantasy and I'm so pissed! Moving forward I think I'll avoid books released after AI came about!
music here
Research work 2P
Housekeeping tasks and self care
Mental health check _journal
Call grandma
Go On a long walk
Read 100 pages (non-academic)
#studyblr#stem academia#100 days of productivity#women in stem#study space#study motivation#study blog#studyspo#realistic studyblr#chaotic academia#academic validation#post grad life#graduate school#gradblr#grad student#engineering college#stemblog#stemblr
25 notes