# Skills required for artificial intelligence
Text
Skills Required for Artificial Intelligence Success: Your Guide
Artificial Intelligence skills encompass machine learning, deep learning, NLP, statistics, Python programming, and proficiency with frameworks such as TensorFlow. Expertise in data preprocessing and algorithm design is vital, as is the ethical awareness to address bias. Problem-solving, adaptability, and effective communication are crucial, along with comfort navigating big data and cloud platforms. This multidisciplinary field thrives on continuous learning, fostering the innovation needed to engineer intelligent systems that reshape industries and society at large.
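As a small, hedged illustration of the data-preprocessing skill mentioned above: the sketch below implements min-max feature scaling using only the Python standard library. The function name and sample data are invented for illustration; a real pipeline would typically reach for NumPy, pandas, or scikit-learn's `MinMaxScaler` instead.

```python
# Minimal data-preprocessing sketch: min-max scaling of feature columns.
# Standard library only; real pipelines would use NumPy/pandas/scikit-learn.

def min_max_scale(rows):
    """Scale each numeric column of `rows` into the range [0, 1]."""
    cols = list(zip(*rows))                      # transpose rows -> columns
    scaled_cols = []
    for col in cols:
        lo, hi = min(col), max(col)
        span = (hi - lo) or 1.0                  # avoid division by zero on constant columns
        scaled_cols.append([(v - lo) / span for v in col])
    return [list(r) for r in zip(*scaled_cols)]  # transpose back to rows

data = [[2.0, 100.0], [4.0, 300.0], [6.0, 200.0]]
print(min_max_scale(data))  # [[0.0, 0.0], [0.5, 1.0], [1.0, 0.5]]
```

Scaling features to a common range like this keeps large-valued columns from dominating distance-based or gradient-based learning algorithms, which is one reason preprocessing appears on skills lists like the one above.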
#Skills required for artificial intelligence#Skills required for AI#Skills for artificial intelligence#Skills for AI#Artificial intelligence course Delhi#AI course Delhi#SoundCloud
0 notes
Text
the argument amongst creatives, novice to skilled for automaton writing as the ("evil apparent...")
(robot voice,) would you like to play a game? I’m going to post three paragraphs below this sentence… one written by ChatGPT, one by Compose, a microsoft app, and one written by an actual human. after i’m done, i’ll tell you which is which. it might just suck you into the candy land brain dead world of total shills. test case 1. Mechanical keyboards are not just a tool for typing, they are a…

#artificial intelligence#cost of living#cryptic industry#dreamers#fledgling#hysteria#imposters#Mcdonalds#no degree#sacrifice#scams#short commute#short ride#skills required#take home pay
0 notes
Video
They explore the impact of AI on marketing, the changing job landscape, and the importance of balancing AI with human skills. They also provide valuable insights on how to prepare and educate yourself to stay ahead in the AI-powered marketing arena. For more click here
#youtube#artificial intelligence#google vs chat gpt#google ai#what is chatgpt#how to use google bard ai chatbot#ai for business#how to use ai in business#unlock the power of ai#how to use ai in your business#ai for content creation#ai for content creators#ai content#ai content creation dominance#open ai#ai tools#what you need to know#ai skills#ai skills required#ai skills for the future#ai skills for digital marketers#ai and digital marketing
0 notes
Text
Generative AI Policy (February 9, 2024)
As of February 9, 2024, we are updating our Terms of Service to prohibit the following content:
Images created through the use of generative AI programs such as Stable Diffusion, Midjourney, and Dall-E.
This post explains what that means for you. We know it’s impossible to remove all images created by Generative AI on Pillowfort. The goal of this new policy, however, is to send a clear message that we are against the normalization of commercializing and distributing images created by Generative AI. Pillowfort stands in full support of all creatives who make Pillowfort their home.

Disclaimer: The following policy was shaped in collaboration with Pillowfort Staff and international university researchers. We are aware that Artificial Intelligence is a rapidly evolving field, and this policy may require revisions in the future to adapt to the changing landscape of Generative AI.
-
Why is Generative AI Banned on Pillowfort?
Our Terms of Service already prohibit copyright violations, which include reposting other people’s artwork to Pillowfort without the artist’s permission. Because Generative AI draws on databases of images and text that were taken without consent from artists or writers, all Generative AI content can be considered in violation of this rule. We also had an overwhelming response from our user base urging us to prohibit Generative AI on our platform.
-
How does Pillowfort define Generative AI?
As of February 9, 2024, we define Generative AI as online tools that produce material based on large collections of data, often gathered without consent or notification from the original creators.
Generative AI tools do not require skill on behalf of the user and effectively replace them in the creative process (i.e., little direction or decision-making comes directly from the user). Tools that assist creativity, by contrast, do not replace the user, which means the user can still improve and refine their skills over time.
For example: if you ask a Generative AI tool to add a lighthouse to an image, the lighthouse appears in a completed state. If you instead use an assistive drawing tool to add a lighthouse, you decide which tools contribute to the creation process and how to apply them.
Examples of Tools Not Allowed on Pillowfort: Adobe Firefly*, Dall-E, GPT-4, Jasper Chat, Lensa, Midjourney, Stable Diffusion, Synthesia
Examples of Tools Still Allowed on Pillowfort:
AI Assistant Tools (e.g., Google Translate, Grammarly); VTuber Tools (e.g., Live3D, Restream, VRChat); Digital Audio Editors (e.g., Audacity, GarageBand); Poser & Reference Tools (e.g., Poser, Blender); Graphic & Image Editors (e.g., Canva, Adobe Photoshop*, Procreate, Medibang, automatic filters from phone cameras)
*While Adobe software such as Adobe Photoshop is not considered Generative AI, Adobe Firefly is fully integrated into various Adobe software and falls under our definition of Generative AI. The use of Adobe Photoshop is allowed on Pillowfort. The creation of an image in Adobe Photoshop using Adobe Firefly, however, is prohibited.
-
Can I use ethical generators?
Due to the evolving nature of Generative AI, ethical generators are not an exception.
-
Can I still talk about AI?
Yes! Posts, Comments, and User Communities discussing AI are still allowed on Pillowfort.
-
Can I link to or embed websites, articles, or social media posts containing Generative AI?
Yes. We do ask that you properly tag your post as “AI” and “Artificial Intelligence.”
-
Can I advertise the sale of digital or virtual goods containing Generative AI?
No. Advertising the offsite sale of goods (digital or physical) containing Generative AI on Pillowfort is prohibited.
-
How can I tell if a software I use contains Generative AI?
As a first step, a general rule of thumb is to test the software with internet access turned off and see if the tool still works. If the software says it needs to be online, there’s a chance it’s using Generative AI and warrants further investigation.
You are also always welcome to contact us at [email protected] if you’re still unsure.
-
How will this policy be enforced/detected?
Our Team has decided we are NOT using AI-based automated detection tools, due to how often they produce false positives, among other issues. Instead, we are applying a suite of methods developed by international university researchers for moderating material potentially produced by Generative AI.
-
How do I report content containing Generative AI Material?
If you are concerned about post(s) featuring Generative AI material, please flag the post for our Site Moderation Team to conduct a thorough investigation. As a reminder, Pillowfort’s existing policy regarding callout posts applies here and harassment / brigading / etc will not be tolerated.
Any questions or clarifications regarding our Generative AI Policy can be sent to [email protected].
2K notes
Text
The programmer Simon Willison has described the training for large language models as “money laundering for copyrighted data,” which I find a useful way to think about the appeal of generative-A.I. programs: they let you engage in something like plagiarism, but there’s no guilt associated with it because it’s not clear even to you that you’re copying. Some have claimed that large language models are not laundering the texts they’re trained on but, rather, learning from them, in the same way that human writers learn from the books they’ve read. But a large language model is not a writer; it’s not even a user of language. Language is, by definition, a system of communication, and it requires an intention to communicate. Your phone’s auto-complete may offer good suggestions or bad ones, but in neither case is it trying to say anything to you or the person you’re texting. The fact that ChatGPT can generate coherent sentences invites us to imagine that it understands language in a way that your phone’s auto-complete does not, but it has no more intention to communicate. It is very easy to get ChatGPT to emit a series of words such as “I am happy to see you.” There are many things we don’t understand about how large language models work, but one thing we can be sure of is that ChatGPT is not happy to see you. A dog can communicate that it is happy to see you, and so can a prelinguistic child, even though both lack the capability to use words. ChatGPT feels nothing and desires nothing, and this lack of intention is why ChatGPT is not actually using language. What makes the words “I’m happy to see you” a linguistic utterance is not that the sequence of text tokens that it is made up of are well formed; what makes it a linguistic utterance is the intention to communicate something. Because language comes so easily to us, it’s easy to forget that it lies on top of these other experiences of subjective feeling and of wanting to communicate that feeling. 
We’re tempted to project those experiences onto a large language model when it emits coherent sentences, but to do so is to fall prey to mimicry; it’s the same phenomenon as when butterflies evolve large dark spots on their wings that can fool birds into thinking they’re predators with big eyes. There is a context in which the dark spots are sufficient; birds are less likely to eat a butterfly that has them, and the butterfly doesn’t really care why it’s not being eaten, as long as it gets to live. But there is a big difference between a butterfly and a predator that poses a threat to a bird. A person using generative A.I. to help them write might claim that they are drawing inspiration from the texts the model was trained on, but I would again argue that this differs from what we usually mean when we say one writer draws inspiration from another. Consider a college student who turns in a paper that consists solely of a five-page quotation from a book, stating that this quotation conveys exactly what she wanted to say, better than she could say it herself. Even if the student is completely candid with the instructor about what she’s done, it’s not accurate to say that she is drawing inspiration from the book she’s citing. The fact that a large language model can reword the quotation enough that the source is unidentifiable doesn’t change the fundamental nature of what’s going on. As the linguist Emily M. Bender has noted, teachers don’t ask students to write essays because the world needs more student essays. The point of writing essays is to strengthen students’ critical-thinking skills; in the same way that lifting weights is useful no matter what sport an athlete plays, writing essays develops skills necessary for whatever job a college student will eventually get. Using ChatGPT to complete assignments is like bringing a forklift into the weight room; you will never improve your cognitive fitness that way.
31 August 2024
106 notes
Text
just read that article from new york magazine, "Everyone Is Cheating Their Way Through College - ChatGPT has unraveled the entire academic project."
didn't reveal anything new to me about the use and functioning of the plagiarism-grown, glorified auto-predict, language models that were rolled out so irresponsibly it means now anyone can waste water instead of their own time and effort. but was still fascinating to read, in a bleak way.
it's so interesting, because cheating and corner-cutting will always exist in education, whether out of desperation or laziness. but by university it truly is wild how many people are not actually there to learn. if you have a program do all your work for you, you are fully not there to learn, so why waste your time and money playing pretend at a degree? a degree you aren't qualified for, because you did not do the work.
we aren't in a post-capitalist universal basic income world where the idea of a few individuals lightly supervising automation is feasible. the technology is not there, and the culture and economic stability are not there. so when a professor in the article reasons to students, “you’re not actually anything different than a human assistant to an artificial-intelligence engine, and that makes you very easily replaceable. Why would anyone keep you around?”, that is not hypothetical. and in terms of the degrees: just because the on-paper grade says you passed doesn't mean you passed. it means you curated automated responses that pass, with no actual guarantee of comprehension or retention of information on your part.
and there are tools and templates and minor automations that can be used to supplement your own efforts! they take longer but not that significantly, and more importantly they are less likely to impede the actual practice of learning to implementation.
that's what a lot of people who cheat or use these tools in this way seem to miss.
let me pull out three paraphrased statements of possible justifications from this article:
The education system is flawed
These exercises are irrelevant
I'm bad at organisation
these all match my own experience of education at various points. and the first point exacerbates issues 2 and 3, to the point where students can feel overwhelmed or underprepared or frustrated for various reasons. however, where i differ personally from the choices these students made is that, while i never had access to such a powerful tool, i still never chose to cheat or cut corners with things like chapter summaries instead of reading a book, or getting someone else to write for me, or any other obvious forms of cheating/plagiarism.
and the reason for this is not a lack of frustration, or of feelings of antagonism towards the system, or of confusion over content, or of organisation skills (all issues i had). it's that throughout my education, going back to primary school, i always tried to figure out WHY we were doing the work assigned to us. what in our studies is it trying to get us to engage with, what methods does it force us to put into use to communicate that knowledge, and how much of the information have we comprehended and retained? some assignments are bad at the execution of these goals, but if you can see what the goals are, you can still benefit from attempting to achieve them while meeting the requirements enough to pass. IMPORTANTLY, working through this frustrating and often inefficient process not only builds critical thinking skills, it is also how you actually learn things.
no one else can know stuff for you. it makes sense to outsource a basic sum to a calculator app on your phone, but this means you are not a mathematician. if you use a chapter by chapter summary to write a book report you have not read that book. if you read the wikipedia article for a movie you have not watched that movie. all of these are more verifiable sources of information than language models.
if you get a transcript of a lecture you did not attend and use a chatbot to make notes for you, then you did not attend that class. if you read the transcript, take notes, and then use the chatbot and compare the difference, at least then you used your capacity for thought to process the information and assess it through comparison... but it would be better to find a classmate and compare notes with a peer. that way you both get the chance to check how well you understood the lecture and refresh the information covered, plus a much lower-stakes opportunity to try out the communication skills that group assignments and oral presentations are often assigned to build. and on top of that, you get to socialise and network with someone in your field of study in a way that benefits both of you.
i'm not even against the use of machine learning models generally, i think they are useful in a repetitive task automation and data scanning context. but why are we delegating things like Knowing Stuff and Human Connection to the 1 and 0 machine that might as easily sell our info as have it leaked to hackers. what kind of cyberpunk surveillance dystopia are we shrugging lazily into? you do not have to pay all that money to pretend to be a competent professional. and if that sounds harsh it's because it is. there are enough scammers and barely qualified people succeeding in this world.
you do not have to dedicate your life to labours that you are not capable of. at the very least, be honest with yourself about your own capacity for thought and action. genuinely try to figure out if you are using this technology because of a 'can't' or a 'won't'.
it's not a tool if it knows more than you- it's a tool if you could do the job without it.
24 notes
Text
When I read the news, I swear sometimes I'm either not understanding something really important, or this whole fucking thing is a lie.
Okay, so top of the article says that Apple will open "a new manufacturing factory in Texas over the next four years."
The article then reports this in the third paragraph:
The iPhone maker’s announcement underscores how tech giants are trying to forge a closer relationship with President Trump as his second administration imposes new tariffs on China — where Apple manufactures its products — and shapes policies on artificial intelligence.
Okay...
Then two more paragraphs down from that, after more discussion of the impact of tariffs on China and meeting with T at the WH, it says, "workers at the factory in Texas will produce servers for Apple Intelligence."
...okay... so they're not actually moving any manufacturing of their products to the United States. They're manufacturing servers for their new AI product.
Then it says this:
The 20,000 new jobs will mostly focus on research and development, silicon engineering, AI and machine learning, the Cupertino, Calif., company said.
So... they're not manufacturing jobs?
Then it says that Apple will expand in other states as well:
Those expansion plans include investments in data centers, its facilities and skills development for students and workers. At a manufacturing facility in Arizona, Apple said, it will spend heavily to produce advanced silicon that is used in its devices. In Detroit, the company said, it’s opening a manufacturing academy that will offer free courses online and in person. Apple engineers will team up with university experts to help small and medium-sized businesses implement AI and manufacturing methods.
So... the only reference to actually manufacturing an item or product is in Arizona where they're manufacturing "advanced silicon." Everything else is data centers, training centers, and attempts to get other businesses to implement its AI.
To be clear, iPhones, computers, airpods, and all the other ubiquitous Apple devices will continue to be manufactured outside of the United States. No movement there, despite the tariffs. So why this headline? Why does this article spend three paragraphs mentioning the tariffs?
On the one hand, shame on me for still subscribing to the LA Times and reading this regurgitated press release posing as an article. On the other hand, is it me...? Like... wtf? What am I not getting here?
I mean, I'm not saying that research and engineering jobs are somehow less valuable than manufacturing jobs, I'm just saying we are constantly being sold a total lie about companies making a manufacturing investment in the United States. It's all just AI data centers. That's it. That does not require a significant number of skilled manufacturing workers. It's just going to be empty towns. Empty towns with huge warehouses.
I just think the whole article is so disingenuous. I'm embarrassed that this stands for journalism, and I'm embarrassed thinking about the people who will read the headline and think 'Oh, nice!' Especially to the extent that it directly states that the tariffs are good or successful for AMERICAN WORKERS. All of this was re-printed by the LAT with no questioning, skepticism, or additional clarity added.
I'm just so fucking over it.
24 notes
Text
I've noticed an uptick in interest in my (joke) AU, so I figured I'd finalize everyone's designs.
Simplified Overview:
Set in a universe where all living alicorns were killed in a war, a program was started to create one through artificial means.
AM/Mindful Ally- A wickedly clever Earth Pony, brought in during the last stages of the experiment. The only success, mostly due to his violent takeover. By some small miracle, they'd managed to restrain him before all hell broke loose. This did very little to save them, as his magic was incredibly powerful. All it really did was prevent him from becoming mobile.
Terminus Theory- The head of the project, and one of the final students of Twilight Sparkle before her death.
Egret Song- A specialist in chemistry, as well as biomechanics. She was largely involved in the machines that enabled the creation of AM.
Gator Tracks- A psychiatrist who helped encourage subjects to sign up for the program. He directly suggested Mindful Ally to Terminus Theory, due to his intelligence.
Neocortex- A surgeon who was instrumental in the project, with a deeply questionable history. Had been locked up in Tartarus before his skills were all but required.
Bunsen Burner- The scientist that came up with the idea for the project, along with Terminus Theory. He was on par with AM's intelligence, before the alicorn transformed him into a diamond dog. (Mostly.)
#my little pony friendship is magic#mlp#i have no mouth and i must scream#ihnmaims#allied mastercomputer
21 notes
Text
MIP school uniform ref (and some notes about the MIP Academy AU under the cut.)
- Instead of the Development Center, they attend a more organized simulacrum of a school. They learn valuable skills to help them become better Groupmates, with the coursework often dictated by their Local Group Assignments. They can also study subjects for the sake of intellectual curiosity if they have time. Once they have completed their required courses as requested by their Local Groups, they "graduate" and reach their Local Group Integration.
- Director is the Principal
- SSR helps teach
- They hire various professors to teach at the school, though much of the coursework is handled by SSR and NCTG. technology allows for complex artificial intelligence programs to offer instruction.
- Implements are often called back to the school, but they're not forced to keep taking classes.
- Assistants serve as the student council and are not allowed to graduate until the Project is over. They also help teachers and help keep other students in line.
- in more silly versions we could have SSR's puppet like. physically show up so he can wear stupid outfits or whatever idk.
- tbh idk what else to do for the teachers youll have to use ur imagination
- sweaters and stuff are allowed. do whatever you want with shoes and socks as long as it isnt too disruptive. (if umbra and sls are watching)
- thorns is also the disciplinary committee and she might dress code you. idk what the dress code is but id imagine its wearing the standard uniform without too much modification.
- i don't remember anything else rn okay cool
24 notes
Text
Lessons in Story: Artificial Intelligence
Artificial intelligence is not an element of story, and yet here we are.
I'm aware that AI is bad for the environment. So's tumblr. That's all true. I'm also aware that AI scrapes copyrighted material like google does. I'm aware of how it steals art for its knowledge base without compensating artists and uses it as a model and a replacement for skills. That's bad. I'm not going to address any of that here.
I have been observing how people talk about using AI in various parts of their writing process at the same time as I've been trying to understand my own process and the obstacles I'm facing, and these two topics have oddly collided.
As I've said previously, my background is in some kind of woo woo where narrative comes out in one whole piece. So the fact that writing is many different and iterative pieces is something I had to figure out in my own bizarre way, but at the moment I now understand the basic process to be in these four general stages:
dreaming/planning (coming up with characters, ideas, goals, worlds, etc.)
outlining (not to say that this isn't many sub-stages; all of these steps are big categories)
writing (actually putting words into sentences so your story exists)
editing (revising, restructuring, polishing, etc.)
Are there more steps that I'm not accounting for? Possibly, but those are the stages as I understand them. You can move back and forth through these stages throughout the process, so it's not necessarily linear, though it could be. For me, the key has been embracing the fact that these are all radically different activities that require a completely different headspace, different skills, sometimes different tools, and a different perspective on narrative. That has been a freeing revelation, because I was trying to do most of it at the same time.
But here's what else I've learned:
Dreaming/planning: this is a zero consistency space when it comes to how close or how far away you are from your protagonist. Are you feeling what they feel, or are you 30,000 feet up looking at the task they have in front of them and the path they're going to take? Or are you somewhere in between? Kind of all of the above at different points.
Outlining: in my experience, this can and should include emotional through lines, but outlining usually focuses on the 30,000 foot view. I have personally never written an outline that didn't miss critical details because of the 30,000 foot gap between me and the protagonist when I outline.
Writing: this seems like the very closest and most intimate you get with your story and your protagonist, right? This is where you live through it with them in extreme detail. There is no distance between you and them, you have to use a telescope to see 30,000 feet up. I find I have to revise my outline in small ways because I often underestimate or overestimate what something's going to feel like on the ground. This is like a micro-discovery phase: not plot discovery, emotional and intimate detail discovery.
Editing: I'm not an expert at this, but so far I feel like it goes back to being extremely inconsistent. It's either very close in a different way, or 30,000 feet up, or various in-between levels, depending on the type of editing or revision. And sometimes it's none of those, it's completely outside looking at how many times you use the word "feel" or whether your verbs and nouns agree.
Right. So people try to insert AI to do the graft for one or more of these stages.
AI in stage 1: I've seen some folks talk about using AI to get ideas for stories. I don't understand that, ideas are the easiest part of this process, as far as I can tell. Life's a rich pageant, maybe that's not universally true. Now, having AI to help you refine an idea, I can see that. Especially if you ask it to point out tropes and cliches as you go. Is that bad? Is that cheating? I dunno.
AI in stage 2: I've never seen anyone say they do this. If you have an amazing and complete story idea and you want it shaped into a 3- or 5-act structure, or a hero's journey, etc., I'm sure AI could do that, but that's mainly just typing. That's AI as workbook. Is that cheating? I dunno. Does an AI-generated outline help you? Or do you just skip the thinking that would have created the details of your story? Hard to say.
AI in stage 3: The wildest version of using AI in the creation of fiction, and there are whole subreddits for it. These are the people who construct novels scene by scene by telling AI to write each one to their specifications and then "heavily editing" the result. So they are ostensibly doing stages 1, 2, and 4 themselves, and are outsourcing stage 3, the hard graft. Though I'd be very surprised if they aren't also using AI for stage 4, let's assume they aren't.
Stage 3 is the only part of the writing process that is protected by copyright, so it's a weird one to outsource. It's also the stage, in my experience, where you do micro-discovery: the in-the-moment scene details and the actual, living emotional experience of your story that you can't completely capture in an outline. So if you just animate your outline without living through the story with your characters, it's always going to feel emotionally 30,000 feet in the air, I think. Right? If you feed AI an outline, that's what you'd get. I think doing this is just avoiding the most intimate and immediate discovery process of creating a story, and I don't think that serves the story or the writer (or "writer").
I'm intrigued that people think you can do this and have it make sense. You'd have to believe that the writing process is simply describing the contents of your outline, but I don't think that's true. It's like trying to get from the twelfth floor to the first floor by skipping the stairs, the elevator, and the escalator and just leaping into the air, assuming you'll land just fine because those intermediary systems are just time-wasters anyway.
I've read some arguments that using AI for stage 3 is something people with disabilities need to get their stories out into the world. As a neurodivergent person, I think that's short-sighted and is a disservice to those stories. I'm pretty sure it's just skipping the work of living through the emotional through line of the story and just not making all the little decisions and constructing the tiny details that go into the telling of a story. That's a heck of a missing staircase. Outlines aren't stories. Skipping the writing part means you're missing 2/3rds of the discovery, and therefore 2/3rds of the richness and depth of the story. How does that serve disabled voices? I don't buy it.
AI in stage 4: the one that looks innocuous but is actually dangerous. Dumping your work into AI and having it fix everything for you. This is a bad idea. Dump your work in there if you want to, but have it tell you what it's finding that needs adjustment so you can make decisions about it yourself. Copying and pasting out of an AI engine means you aren't making decisions about it, you're deferring decisions to a machine. That's the fastest way possible to erase your own voice. I can see getting it to flag things it has questions about, but taking AI advice on your writing is way too trusting.
I think this is especially dangerous for writers who don't have confidence in their own voice. AI's voice may seem like a better choice to them, and that's really sad.
I have more to say about AI, but this is more than enough for now.
22 notes
Note
I'd like to know how a Porygon-Z would do as a pet. My favorite little creature :)

As with this species' pre-evolutions, porygon-z are a curious pet candidate. These pokémon are artificial, mostly digital beings, which makes them about as rare as they are behaviorally peculiar. If you do manage to adopt a porygon-z (or provide your porygon2 a Dubious Disc to allow them to evolve into one), I’m afraid there aren’t a lot of resources out there to help you with caring for them. It takes some pretty advanced programming to bring about a porygon-z, so finding one to adopt can be pretty expensive or will require some advanced computer engineering skills. This blog’s primary source of information, the pokédex, has hardly anything to say about this species. Because of this, this post may feel a little… cobbled together… but I’ll do my best to provide you with all the speculative information I can. The bottom line, though, is that porygon-z might make good pets for some owners, but their unpredictable behavior and abilities may make them more than most can handle.
Let’s start with the easy stuff. Porygon-z are a good size for a house pet. They’re far from too heavy, and their ability to levitate makes it easy for them to comfortably get around, even in smaller living spaces. You’re probably already wondering, though: how big a risk is there of a porygon-z wandering off into other forms of space? Porygons, after all, have the fascinating ability to traverse digital space, which can cause some issues when it comes to owning one as a pet (see the porygon post, linked at the bottom of this one). Porygon-z are created using porygon2s as a base, meaning that many of their programmed behaviors and abilities can be inferred by looking at the information we have about their predecessors. These related pokémon are so similar in some ways, in fact, that many pokémon scholars don’t even consider porygon-z an entirely new evolution of pokémon (Shield). It is fair to assume that porygon-z have a similar ability to traverse cyberspace, like porygons, but it doesn’t end there. Porygon-z seem to have been designed to traverse and work in even stranger dimensions of reality, described vaguely in the pokédex as “alien dimensions” (Platinum, HeartGold/SoulSilver). This was supposed to make them a “better” and “more advanced pokémon” (Diamond/Pearl, Scarlet), an absurdly subjective and frankly insulting goal (porygons and porygon2s are perfect the way they are). Whether or not this programming worked seems to be up in the air, but no matter what, you will want to take precautions to avoid them wandering off into cyberspace. This is, of course, easier said than done. When it comes to porygons, which have very predictable programming, there isn’t a lot of risk of them popping off without permission. Porygon-z, by contrast, are quite erratic.
As a result of the programming they’ve been given in order to turn them into interdimensional travelers, porygon-z are unpredictable both in their movements and their behavior (Diamond/Pearl, Sun, Moon). This may make porygon-z difficult to train and to contain, and could even make them dangerous. Like their pre-evolutions, these pokémon are capable of using some pretty gnarly moves, like Tri-Attack, Double-Edge, and Hyper Beam, which could easily prove lethal in the wrong context. Given the lack of information about this species’ behavior besides it being “odd”, it is difficult to recommend them to someone unless they are aware that caring for them might look different every day.
On a positive note, these pokémon have a pretty good ease of care. Porygon2s, which porygon-z get most of their programming and physical “biology” from, can survive in the vacuum of space, after all! The problems with caring for a porygon-z don’t lie so much in danger to them as in danger to you, and in the risk of their getting lost. They are also, most likely, highly intelligent and social, if those parts of their porygon2 programming remain. They are likely much more adaptive than porygons, for better and for worse.
All in all, it is about as hard to recommend porygon-z as a pet as it is to explain exactly why I can’t. While their needs are very simple, their formidable ability to cause harm and run away from home, combined with their notoriously erratic behavior, makes them a pet that only the most experienced pokémon (and preferably, porygon) owners can handle.
The Porygon Post:
Text

Exclusive Interview with Ljudmila Vetrova- Inside Billionaire Nathaniel Thorne's Latest Venture
CLARA: I'm here with my friend Ljudmila Vetrova to talk about the newest venture of reclusive billionaire Nathaniel Thorne- GAMA. Ljudmila, could you let the readers in on the secret- what exactly is this mysterious project about?
LJUDMILA: Sure, Clara! As part of White City's regeneration programme, Nathaniel has teamed up with the Carlise Group to create a cutting-edge medical clinic like no other. Introducing GAMA– a private sanctuary for the discerning, offering not just top-notch medical care and luxurious amenities, but also treatments so innovative they push the envelope of medical science.
CLARA: Wow! Ljudmila, it sounds like GAMA is really taking a proactive approach to healthcare. But can you tell us a bit more about the cutting-edge technology behind this new clinic?
LJUDMILA: Of course! Now, GAMA is not just run by human professionals, it's also aided by an advanced AI system known as KAI – Kronstadt Artificial Intelligence. KAI is the guiding force behind every intricate detail of GAMA, handling everything from calling patients over the PA system to performing complex surgical procedures. Even the doors have a touch of ingenuity, with no keys required- as KAI simply detects the presence of an RFID chip embedded in the clothing of both patients and staff, allowing swift and secure access to the premises. With KAI at the helm, patients and staff alike benefit from streamlined care.
CLARA: A medical AI? That's incredible! I've heard much of the medical technology at GAMA was developed by Kronstadt Industries and the Ether Biotech Corporation, as a cross-disciplinary partnership to create life-saving technology. Is that true?
LJUDMILA: It sure is, Clara! During the COVID-19 pandemic, GAMA even had several departments dedicated to researching the virus, assisting in creating a vaccine with multiple companies. From doctors to nurses and administrative personnel, the team at GAMA is comprised of skilled individuals who are committed to providing the best care possible. All of the GAMA staff are highly educated with advanced degrees and have specialized training in their respective fields.
CLARA: Stunning! Speaking of the GAMA staff, rumors surrounding the hiring of doctors Pavel Frydel and Akane Akenawa have made headlines, with claims that they supposedly transplanted a liver infected with EHV, leading to the unfortunate demise of the patient shortly after. Such allegations might raise questions about the hospital's staff selection process and adherence to medical guidelines and ethical standards. Do you have any comment on these accusations, Ljudmila?
LJUDMILA: Er- well, Clara, the management of GAMA Hospital has vehemently denied all allegations of unethical practices and maintains that they uphold the highest standards of care for all patients. They state that they conduct thorough background checks on all staff members, including doctors, and that any individuals found to be involved in unethical practices are immediately removed from their position. The hospital has a strict code of ethics that all staff must adhere to, and any violations are taken very seriously. In response to the specific claims about the transplant procedure, GAMA states that they are investigating the matter in cooperation with the relevant authorities.
CLARA: Wonderful! I'm afraid that's all we have time for at the moment- lovely chatting with you again, Ljudmila!
@therealharrywatson @artofdeductionbysholmes @johnhwatsonblog
Text
Specs
Lore rambling under the cut
Biogel was developed approximately 30 years ago by Lakeview Industries, and has since revolutionized artificial sapient beings.
It's composed of a heavily genetically modified colony of plankton suspended in a gelatinous concoction of chemicals. It serves as the brain and battery for gel-based intelligences (GBI for short).
Early GBI were, for the most part, giant tankers of biogel stored in warehouses with maybe a couple read-out displays attached for research. Even today, biogel is not nearly as dense or complex as human brain matter, and as such far more of it is required to create a sapient consciousness. Model 1151, used in this example, is approximately 50% biogel storage by mass.
Modern GBI are much more streamlined, and now feature a multi-node system to separate brain processes for parallel processing and redundancy. Different biogel 'nodes' with slightly different composition are split throughout the chassis, connected through gel-veins. Nearly all modern GBI use a 4-node construction. The nodes are as follows:
- Primary Node: Best thought of as the prefrontal cortex of a GBI. It performs the GBI's conscious thought, and holds short-term memory. It also performs head movement and processes audiovisual sensory input.
- Secondary Nodes: Each major limb is assigned a smaller secondary node, which controls that limb's motor skills, as well as processing tactile sensory input and spatial positioning (proprioception). Secondary nodes don't do nearly as much 'thinking' as the primary node, but they handle some unconscious reflex.
- Auxiliary Node-Clusters: Commonly referred to as Aux Nodes. Aux nodes are unthinking, solely used for high-fidelity sensory input. They are primarily placed on palms and soles of the feet. Many models will also use aux nodes for more specific purposes- Model-1151, for example, will flash its 'antenna' on and off to communicate over large distances via morse code.
- Tertiary Node: The tertiary node is also unthinking, and is used for storage. It contains the GBI's long-term memory, and while somewhat poorly understood, seems to contain its 'personality'. A GBI can be fully regenerated as long as its tertiary node remains intact, albeit with some memory loss.
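(Purely for fun, here's a sketch of what the antenna morse-code signaling mentioned above might look like as code. The morse patterns for these letters are the real standard, but the flash-timing units and the `to_flashes` function are invented for this illustration — the lore doesn't specify any of it.)

```python
# Hypothetical sketch of Model-1151's antenna signaling.
# Timing units are made up: dot = 1 unit on, dash = 3 units on.
MORSE = {'S': '...', 'O': '---', 'K': '-.-'}  # partial table for the demo

def to_flashes(message):
    """Convert a message into a list of (state, duration) antenna flashes."""
    flashes = []
    for ch in message.upper():
        if ch not in MORSE:
            continue  # skip characters outside the demo table
        for symbol in MORSE[ch]:
            flashes.append(('on', 1 if symbol == '.' else 3))
            flashes.append(('off', 1))  # gap between dots/dashes
        flashes.append(('off', 2))  # extra gap between letters
    return flashes

print(to_flashes('SOS'))
```

So a distress flash from across a warehouse would start with three short blinks, three long, three short — readable by any GBI (or human) watching the antenna.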
Note
"chatgpt writing is bad because you can tell when it's chatgpt writing because chatgpt writing is bad". in reality the competent kids are using chatgpt well and the incompetent kids are using chatgpt poorly... like with any other tool.
It's not just like other tools. Calculators and computers and other kinds of automation don't require you to steal the hard work of other people who deserve recognition and compensation. I don't know why I have to keep reminding people of this.
It also uses an exorbitant amount of energy and water during an environmental crisis and it's been linked to declining cognitive skills. The competent kids are becoming less competent by using it and they're fucked when we require in-class essays.
Specifically, it can enhance your writing output and confidence but it decreases creativity, originality, critical thinking, reading comprehension, and makes you prone to data bias. Remember, AI privileges the most common answers, which are often out of date and wrong when it comes to scientific and sociological data. This results in reproduction of racist and sexist ideas, because guess what's common on the internet? Racism and sexism!
Here's a source (it's a meta-analysis, so it aggregates data from a collection of studies. This means it has better statistical power than any single study, which could have been biased in a number of ways. Meta-analysis = more data points; more data points = higher accuracy).
This study also considers positives of AI by the way, as noted it can increase writing efficiency but the downsides and ethical issues don't make that worthwhile in my opinion. We can and should enhance writing and confidence in other ways.
Here's another source:
The issue here is that if you rely on AI consistently, certain skills start to atrophy. So what happens when you can't use it?
I'm not completely against all AI; there is legitimate possibility for ethical usage when it's trained on paid-for data sets and used for a specific purpose. I've seen good evidence for use in medical fields, and for enhancing language learning in certain ways. If we can find a way to reduce the energy and water consumption then cool.
But when you write essays with chatgpt you're just robbing yourself of an opportunity to exercise valuable cognitive muscles, and you're also robbing millions of people of the fruits of their own intellectual and creative labor. Also like, on a purely aesthetic level it has such boring prose, it makes you sound exactly like everyone else, and I actually appreciate a distinctive voice in a piece of writing.
It also often fails to cite ideas that belong to other people, which can get you an academic violation for plagiarism even if your writing isn't identified as AI. And by the way, AI detection software is only going to keep getting better in tandem with AI.
All that said it really doesn't matter to me how good it gets at faking human or how good people get at using it, I'm never going to support it because again, it requires mass scale intellectual theft and (at least currently) it involves an unnecessary energy expenditure. Like it's really not that complicated.
At the end of the day I would much rather know that I did my work. I feel pride in my writing because I know I chose every word, and because integrity matters to me.
This is the last post I'm making about this. If you send me another ask I'll block you and delete it. This space is meant to be fun for me and I don't want to engage in more bullshit discourse here.
Text
I've been watching Vrains for the first time, and I finished season 1
some thoughts below (a lot. a lot of thoughts below)
Setting & Setup
The setting is a very smart choice, and the highlight of Vrains in my opinion.
The aesthetic is relevant, it integrates well with Konami marketing its mobile games, and it neatly deals with the problem every Yugioh series has to address of how to make consequences for duels that can be safely broadcast on children's television (aka the shadow realm / sent to the stars problem)
The virtual world opens up so much possibility in how to present things.
In the character and setting designs, obviously, but also, the duels have their own visual identity. They don't need to play cards on a duel disk or hold cards in their hand, cards will just appear and disappear in pixels (it also probably saves on their animation budget). The duel grid and other visualizations can show up as they're relevant. And seeing the characters take gashes to their avatars is dramatic while still being kid-friendly.
Every Yugioh needs its peanut gallery to react to the duel and be explained to, and Vrains incorporates them the most easily of any series, since they don't need to be physically present.
Kill the frog and the pigeon though.
On the topic of duels, I like both link summoning and speed duels.
I like the spatial/positional element of link summoning, it puts an additional layer on top of "little monsters make big monster."
And speed duels have all the advantages of turbo duels, allowing for dynamic action, visual metaphor for the tide of the duel, and the ability to change locations without sacrificing pacing, all while requiring less suspension of disbelief than card games on motorcycles. It's easy to take for granted, but the heist sequences wouldn't work at all if they had to be standing duels.
I also liked this about season 1 of 5Ds, where there's two modes of dueling (turbo duels and ground duels). The contrast between speed duels and master duels is fun.
Skills are cool too, though I feel like they could have designed better ones. Seeing Yusaku drop below 1000 got kind of predictable.
And finally, there is rich thematic potential in this kind of virtual setting. Themes about our relationship with social media, video games, artificial intelligence, and tech corporations are very relevant and have depth. I don't have the highest of hopes for Yugioh tackling them, based on how this first season has gone, but I'll withhold judgment on this until I'm finished.
Yusaku
I like Yusaku. He's blunt, but he's not edgy like I thought he would be. He's actually kind of nice toward Naoki when they meet. The three things tic is charming. I like his hacker deck, it's probably my favorite protagonist deck theme after Elemental Heroes.
Yusaku's problem is not that he's boring per se, but he isn't really put in any interesting situations in S1.
A lot of his duels are just him being challenged to a duel he has no interest in (Go, Blue Angel, Ghost Girl), he beats them, and they don't actually end up forming a relationship or establishing a dynamic. Because their characters don't have anything to do with each other except mutually not wanting the bad guys to do bad things.
Yusaku gains allies, but he doesn't make friends. He starts off with his only real relationship being Kusanagi, and that doesn't really change by the end of the season (and his relationship with Kusanagi is not very developed either).
Now, there is a reason for this, which is I think the core of Yusaku's character in S1. It's that due to the traumatic event of his childhood, time has stopped for him. This is very real for victims of traumatic events, being unable to move forward in their lives, develop relationships or think about the future, because their minds are still stuck in the past. This is why Yusaku seeks his revenge. It's not revenge he's seeking, it's closure.
This is a theme that's worth exploring. The problem is that I don't really think they explore it, not sufficiently enough for me to give them credit. If they were exploring it, they could have shown Yusaku reckoning with the divide between him and others in a number of ways.
Most chiefly, by forcing him to make friends anyway. This is Yugioh goddamnit. The opportunity was right there with Naoki, but it's just played as a joke. Instead, most of this theme is squeezed in at the end of the final duel vs. Revolver, and without the proper build-up, the moment of Yusaku renouncing his revenge and reaching out to be friends with Revolver doesn't land nearly as strong as it could have.
If there is one relationship that Yusaku maybe develops though, it's...
Ai
The relationship between Yusaku and Ai should be what the show hinges on, based on the premise, Ai's status as the "partner," and glimpses I've seen of them through fandom.
Ai is the inciting incident of the story, his existence drives the plot forward, because Hanoi wants him, but Yusaku has him, and Ai doesn't want anything to do with either of them. This premise is gold. It's rife with dramatic potential. Ai is forced to work together with his captor. Yusaku is forced to work together with this goddamn annoying AI. They are both just trying to use the other, but end up developing a bond.
Or at least... that's what I think should have happened...
Very little happens between them in season 1, and it either goes nowhere, or comes out of nowhere. Ai tries to escape, but that thread is just dropped and forgotten. Various Hanoi guys hint that the Ignis can't be trusted, but it doesn't really faze Yusaku because he already doesn't trust Ai. The same thing happens when Revolver reveals that Ai is his counterpart from the Lost Incident and has known it this whole time.
There's only one turning point in their dynamic, which is in the second to last duel vs. Revolver, where Ai uses his body as a shield so that Yusaku can use Storm Access. And even then, Ai says it's because if Yusaku loses, Revolver will kill him. But that's been their entire dynamic for the season anyway? Why is this positioned as the emotional moment where they become partners?
By the end of season 1, they're... allies. The same as the rest of the characters on Yusaku's side. But if there was one character Yusaku should have made friends with, it's Ai. Especially if they are positioning for a humans vs. AI conflict.
His design is cute though.
Go
Go's problem is that he needs to be integrated into the story and cast. Aoi at least has a relationship with her brother and Ghost Girl. Go is connected to... some nameless orphan children, a nameless manager, and a childhood orphan friend who shows up for 5 seconds, is put into a coma in order for Go to have a motivation to duel Genome, and never appears again.
Go isn't a best friend character, and he's not a rival either. He's not even a friend character, period. He really just seems there to be a third duelist.
Does he even know about the Lost Incident, or why Playmaker is even fighting Hanoi? Go has no clue what the plot even is, how can he be involved in it? My guy is living in a different story.
It's a shame, because Go does have some interesting bits to his character. Being a charisma duelist is central to his character (unlike Aoi, whose relationship to charisma dueling seems to end at being a cute idol girl), which could have been used to explore the culture of Link Vrains and the performativity of online spaces.
This is tied to some kind of theme he has going on of dueling for others vs. dueling for yourself. It's brought up in contrast to Yusaku, and why he initially dislikes Playmaker. All of that could have been interesting, but it doesn't really get a full treatment.
Revolver
Revolver is fine as a season 1 antagonist. He's not really a character yet, but I'm interested in where they take him from here. His backstory is sympathetic honestly. It's a pretty familiar and tragic situation, where a child narcs on their parent, who isn't even a good parent, but then comes to regret it.
I also think Revolver is sympathetic because I would nuke the internet in a heartbeat.
His Link Vrains design is cool. Mirror Force is funny, so are the gun dragons. The final duel vs. Yusaku was sick honestly, I loved the extra extra link.
Anyway, I still enjoyed season 1 and think there's room to take a turn for the better. In my experience, there's two kinds of yugiohs, the ones that start off strong, and the ones that end strong. My suspicion is that Vrains is the latter.
On to season 2! Time to meet everyone's favorite Salad king :^)
Hm? Was there someone I missed?
ha ha...... you get your own post, Aoi.
Note
hiiii i was just curious — as u mentioned you're in college and i'm not very educated about the system anymore — if you had any insight about lu's uni life! (i had to drop out after my first semester years ago for health issues, would love to go back and be in an academic environment again but.. sigh unfortunately it feels like mid 20's is too late to start again, along with the risk of not knowing what degree would be right and it being costly — ahh sorry i'm side tracking) i know he got his bachelors and masters in 4 years with honours (which sounds like a crazy feat in itself) but was also wondering how that fit in with him also being a counsellor (?) ta (?) at stanford! i saw someone say he did it for like a year (don't quote me on that) so was curious how that side of things work, and at what year into his degree he did it! was it part of a program that gave him credits he could transfer back to penn? or was it something that cut into his degree time and was mainly for experience? another thing i saw ppl talking about is the amount of time in which he completed his degree, i typically thought a bachelors is a 3-4 year mark and then masters is another year or 2 on top of that, so how did he complete both so quickly? i could be completely wrong here so sorry for all the questions, and thank u for being such a big sis like safe space!
Hello!
This is the best insight that I can give you from his uni life:
So, first off, yes, he was a counselor and also a TA while in college, and these two roles were two completely different experiences that he had while at UPenn and outside of his time while in school. He was a head counselor for the Stanford Pre-Collegiate Studies, an academic program that middle and high school students can attend to learn advanced coursework in different subjects. As a counselor, he supervised the residential aspect for students attending the program and taught them artificial intelligence during the summer of 2019 while on summer break; he did this during the summer after his junior (3rd) year and before the start of his last (4th) year at UPenn. This had no affiliation with him going to UPenn; he participated in this probably out of his own interest. So no, it was not part of a program that gave him credits. My educated guess on why he did this was to broaden his leadership skills, explore more with his computer science background, and add an impressive experience to his resume and educational experience.
Then, between his second and third years of college, he was a teaching assistant for an actual class at UPenn for about a year and a half. Now, I don't know if he actually received credits towards his degree as a TA, but it is very common to earn college credit by assisting with teaching courses, depending on the institution and program you're a part of.
And then about his Bachelor's and Master's degrees—yes, typically, a Bachelor's degree takes about 4 years to complete. On the other hand, a Master's can take about 1-2 years, although it can vary based on different factors like the program requirements, the enrollment type, and how fast the program goes. I don't know what the Computer Science program at UPenn is like, but he probably completed his degrees that quickly because of either an accelerated or a dual-degree program, where the curriculum of your field of study allows you to earn both degrees in a shorter time at an accelerated rate. Many universities offer this kind of program for a number of different studies, where you can earn both a Bachelor's and a Master's degree concurrently within 4 years.
A little side note: I'm sorry you dropped out early because of health issues. I know that wasn't the easiest decision to make, but know that you did the right thing by taking care of yourself first and foremost. Please understand that you are never too late to start all over again, and it's okay if you don't necessarily know what you want to study at first—at some point, it will come to you when you see what you want to do and what matters the most to you. If this gives you any sign of reassurance, I did not go to college for my first degree until an entire year after graduating high school. I decided to take a gap year. I was severely ill with a virus, and I fought through my last year of high school extremely sick. When I graduated high school, I realized then that I wasn't ready for that next transition—I realized that I needed to take care of myself first before starting that next part of my life. I'm glad I did because I was ready when I finally started college. I know many people who are older than me who have started school later in life, whether to get their first ever degree or their next one, whether it was because of a job change, family life, or financial reasons. Remember, do what you want to do and what makes you content in your life, and don't think you have to compare yourself to another person—we all have different choices on what we want to achieve in life, which means we may have to go about them differently. That doesn't automatically mean you're on the wrong path! 🤍