#data visualization training program
Text
Complete Data Visualization Training: Transform Data into Insights
Learn to create powerful, interactive dashboards with our data visualization training program. Gain practical skills in designing charts, graphs, and visuals that drive better business decisions. This course is perfect for anyone looking to master visual storytelling and data interpretation.
0 notes
Text
Data Visualization in SAS: Creating Impactful Dashboards
In today's data-driven world, the ability to visualize data effectively is more important than ever. Data visualization helps convert complex datasets into clear, actionable insights, enabling stakeholders to make informed decisions quickly. SAS programming offers robust tools for creating impactful dashboards that can transform raw data into compelling visual narratives.
The Role of Data Visualization
Data visualization serves multiple purposes. It allows analysts to uncover patterns, trends, and anomalies in data, making it easier to communicate insights to non-technical stakeholders. In industries such as healthcare, finance, and marketing, effective data visualization can significantly influence strategic decisions, driving better outcomes.
Getting Started with Data Visualization in SAS
To create an impactful dashboard using SAS programming, follow these key steps:
1. Define Key Metrics: Before diving into the visualization process, identify the key performance indicators (KPIs) and metrics that matter most to your audience. Focus on data points that directly support the business objectives and will be most relevant to decision-makers.
2. Utilize SAS Visual Analytics: SAS Visual Analytics is a powerful tool that simplifies the process of creating interactive dashboards. Its drag-and-drop interface allows users to build reports without extensive coding knowledge. For those new to SAS, taking a SAS programming full course can provide a solid foundation in using these visualization tools effectively.
3. Choose the Right Visual Elements: Selecting appropriate visual elements is crucial for conveying information clearly. Use bar charts for categorical comparisons, line graphs for trends over time, and heat maps for density visualization. Each visual should serve a specific purpose and enhance understanding.
4. Incorporate Interactivity: Adding interactive features, such as filters and drill-down capabilities, allows users to explore data at different levels. This interactivity engages stakeholders, enabling them to derive their own insights from the data.
5. Test and Iterate: Once your dashboard is created, seek feedback from users. Testing the dashboard’s usability and clarity can uncover areas for improvement. Iterative refinement based on user input ensures the final product meets the audience’s needs.
Best Practices for Dashboard Design
To maximize the effectiveness of your dashboards, consider the following best practices:
1. Simplicity is Key: Avoid overcrowding your dashboard with too much information. Focus on essential metrics and keep the design clean. A cluttered dashboard can confuse users and detract from the main insights you wish to convey.
2. Consistent Design Elements: Use a uniform color palette and consistent font styles to create a cohesive look. This consistency helps establish a professional appearance and enhances readability.
3. Focus on Storytelling: Data visualization is about storytelling. Arrange your visual elements to guide users through the insights logically. Begin with high-level metrics and allow users to drill down into more detailed views as needed.
4. Ensure Accessibility: Consider the diverse audience who will view your dashboard. Ensure that your visualizations are accessible to individuals with varying levels of data literacy. Use clear labels, legends, and descriptions to enhance understanding.
5. Monitor Performance: After deploying your dashboard, monitor its usage and performance. Collect feedback to identify areas for improvement and adapt the dashboard as necessary. Continuous enhancement will keep your visualization relevant and valuable.
Learning Resources for SAS Visualization
For those eager to master data visualization in SAS, numerous resources are available. Online tutorials, such as a SAS tutorial focused on visualization techniques, can provide valuable insights into best practices and tool usage.
Conclusion
Data visualization is a powerful tool for communicating insights derived from complex datasets. SAS programming equips users with the capabilities to create impactful dashboards that enhance decision-making across various industries. By following best practices, utilizing available resources, and continuously refining your skills, you can transform data into compelling visual narratives that drive strategic outcomes. Investing time in learning SAS programming for data visualization and taking advantage of SAS online training will undoubtedly elevate your analytical capabilities and career prospects.
#sas programming tutorial#sas programming#sas programming full course#data visualization#learning sas#sas online training
0 notes
Text
Big Data vs. Traditional Data: Understanding the Differences and When to Use Python

In the evolving landscape of data science, understanding the nuances between big data and traditional data is crucial. Both play pivotal roles in analytics, but their characteristics, processing methods, and use cases differ significantly. Python, a powerful and versatile programming language, has become an indispensable tool for handling both types of data. This blog will explore the differences between big data and traditional data and explain when to use Python, emphasizing the importance of enrolling in a data science training program to master these skills.
What is Traditional Data?
Traditional data refers to structured data typically stored in relational databases and managed using SQL (Structured Query Language). This data is often transactional and includes records such as sales transactions, customer information, and inventory levels.
Characteristics of Traditional Data:
Structured Format: Traditional data is organized in a structured format, usually in rows and columns within relational databases.
Manageable Volume: The volume of traditional data is relatively small and manageable, often ranging from gigabytes to terabytes.
Fixed Schema: The schema, or structure, of traditional data is predefined and consistent, making it easy to query and analyze.
Use Cases of Traditional Data:
Transaction Processing: Traditional data is used for transaction processing in industries like finance and retail, where accurate and reliable records are essential.
Customer Relationship Management (CRM): Businesses use traditional data to manage customer relationships, track interactions, and analyze customer behavior.
Inventory Management: Traditional data is used to monitor and manage inventory levels, ensuring optimal stock levels and efficient supply chain operations.
What is Big Data?
Big data refers to extremely large and complex datasets that cannot be managed and processed using traditional database systems. It encompasses structured, unstructured, and semi-structured data from various sources, including social media, sensors, and log files.
Characteristics of Big Data:
Volume: Big data involves vast amounts of data, often measured in petabytes or exabytes.
Velocity: Big data is generated at high speed, requiring real-time or near-real-time processing.
Variety: Big data comes in diverse formats, including text, images, videos, and sensor data.
Veracity: Big data can be noisy and uncertain, requiring advanced techniques to ensure data quality and accuracy.
Use Cases of Big Data:
Predictive Analytics: Big data is used for predictive analytics in fields like healthcare, finance, and marketing, where it helps forecast trends and behaviors.
IoT (Internet of Things): Big data from IoT devices is used to monitor and analyze physical systems, such as smart cities, industrial machines, and connected vehicles.
Social Media Analysis: Big data from social media platforms is analyzed to understand user sentiments, trends, and behavior patterns.
Python: The Versatile Tool for Data Science
Python has emerged as the go-to programming language for data science due to its simplicity, versatility, and robust ecosystem of libraries and frameworks. Whether dealing with traditional data or big data, Python provides powerful tools and techniques to analyze and visualize data effectively.
Python for Traditional Data:
Pandas: The Pandas library in Python is ideal for handling traditional data. It offers data structures like DataFrames that facilitate easy manipulation, analysis, and visualization of structured data.
SQLAlchemy: Python's SQLAlchemy library provides a powerful toolkit for working with relational databases, allowing seamless integration with SQL databases for querying and data manipulation.
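To make this concrete, here is a minimal sketch of how the two libraries above can work together on traditional, structured data. It is illustrative only: the SQLite file, table, and column names are invented for the example, and it assumes pandas and SQLAlchemy are installed.

```python
# Minimal sketch: query a relational table with SQLAlchemy, then analyze it with Pandas.
# The database file, table, and column names below are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///sales.db")  # connection to an example SQLite database

# Pull structured, transactional data directly into a DataFrame
df = pd.read_sql("SELECT region, product, revenue FROM sales", engine)

# A typical traditional-data task: aggregate revenue by region
summary = df.groupby("region")["revenue"].sum().sort_values(ascending=False)
print(summary)
```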
Python for Big Data:
PySpark: PySpark, the Python API for Apache Spark, is designed for big data processing. It enables distributed computing and parallel processing, making it suitable for handling large-scale datasets.
Dask: Dask is a flexible parallel computing library in Python that scales from single machines to large clusters, making it an excellent choice for big data analytics.
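As a rough illustration of the difference in workflow, the sketch below uses Dask, one of the two tools just mentioned, to process a collection of CSV files that would not fit comfortably in a single Pandas DataFrame. The file pattern and column names are placeholders, and it assumes Dask is installed.

```python
# Minimal Dask sketch: a Pandas-like API, but work is split into chunks
# and only executed when .compute() is called.
# The file pattern and column names are hypothetical.
import dask.dataframe as dd

# Lazily treat many CSV files as one logical DataFrame
logs = dd.read_csv("logs/2024-*.csv")

# Build a computation graph: filter error events, then count them per user
errors_per_user = (
    logs[logs["status"] == "error"]
    .groupby("user_id")["event_id"]
    .count()
)

# Nothing has run yet; this line triggers the parallel computation
print(errors_per_user.compute().head(10))
```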
When to Use Python for Data Science
Understanding when to use Python for different types of data is crucial for effective data analysis and decision-making.
Traditional Data:
Business Analytics: Use Python for traditional data analytics in business scenarios, such as sales forecasting, customer segmentation, and financial analysis. Python's libraries, like Pandas and Matplotlib, offer comprehensive tools for these tasks.
Data Cleaning and Transformation: Python is highly effective for data cleaning and transformation, ensuring that traditional data is accurate, consistent, and ready for analysis.
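As one hedged example of what such a cleaning and transformation step can look like, the short sketch below tidies a hypothetical customers.csv; the file and column names are made up, and real pipelines will differ.

```python
# Minimal data-cleaning sketch with Pandas; file and column names are invented.
import pandas as pd

df = pd.read_csv("customers.csv")

# Remove exact duplicates and rows missing a customer ID
df = df.drop_duplicates().dropna(subset=["customer_id"])

# Normalize types and text fields so downstream analysis is consistent
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
df["country"] = df["country"].str.strip().str.title()

# Fill missing numeric values with a neutral default before analysis
df["monthly_spend"] = df["monthly_spend"].fillna(0.0)

print(df.dtypes)
```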
Big Data:
Real-Time Analytics: When dealing with real-time data streams from IoT devices or social media platforms, Python's integration with big data frameworks like Apache Spark enables efficient processing and analysis.
Large-Scale Machine Learning: For large-scale machine learning projects, Python's compatibility with libraries like TensorFlow and PyTorch, combined with big data processing tools, makes it an ideal choice.
The Importance of Data Science Training Programs
To effectively navigate the complexities of both traditional data and big data, it is essential to acquire the right skills and knowledge. Data science training programs provide comprehensive education and hands-on experience in data science tools and techniques.
Comprehensive Curriculum: Data science training programs cover a wide range of topics, including data analysis, machine learning, big data processing, and data visualization, ensuring a well-rounded education.
Practical Experience: These programs emphasize practical learning through projects and case studies, allowing students to apply theoretical knowledge to real-world scenarios.
Expert Guidance: Experienced instructors and industry mentors offer valuable insights and support, helping students master the complexities of data science.
Career Opportunities: Graduates of data science training programs are in high demand across various industries, with opportunities to work on innovative projects and drive data-driven decision-making.
Conclusion
Understanding the differences between big data and traditional data is fundamental for any aspiring data scientist. While traditional data is structured, manageable, and used for transaction processing, big data is vast, varied, and requires advanced tools for real-time processing and analysis. Python, with its robust ecosystem of libraries and frameworks, is an indispensable tool for handling both types of data effectively.
Enrolling in a data science training program equips you with the skills and knowledge needed to navigate the complexities of data science. Whether you're working with traditional data or big data, mastering Python and other data science tools will enable you to extract valuable insights and drive innovation in your field. Start your journey today and unlock the potential of data science with a comprehensive training program.
#Big Data#Traditional Data#Data Science#Python Programming#Data Analysis#Machine Learning#Predictive Analytics#Data Science Training Program#SQL#Data Visualization#Business Analytics#Real-Time Analytics#IoT Data#Data Transformation
0 notes
Note
You’ve probably been asked this before, but do you have a specific view on ai-generated art. I’m doing a school project on artificial intelligence and if it’s okay, i would like to cite you
I mean, you're welcome to cite me if you like. I recently wrote a post under a reblog about AI, and I did a video about it a while back, before the full scale of AI hype had really started rolling over the Internet - I don't 100% agree with all my arguments from that video anymore, but you can cite it if you please.
In short, I think generative AI art
Is art, real art, and it's silly to argue otherwise, the question is what KIND of art it is and what that art DOES in the world. Generally, it is boring and bland art which makes the world a more stressful, unpleasant and miserable place to be.
AI generated art is structurally and inherently limited by its nature. It is by necessity averages generated from data-sets, and so it inherits EVERY bias of its training data and EVERY bias of its training data validators and creators. It naturally tends towards the lowest common denominator in all areas, and it is structurally biased towards reinforcing and reaffirming the status quo of everything it is turned to.
It tends to be all surface, no substance. As in, it carries the superficial aesthetic of very high-quality rendering, but only insofar as it reproduces whatever signifiers of "quality" are most prized in its weighted training data. It cannot understand the structures and principles of what it is creating. Ask it for a horse and it does not know what a "horse" is, all it knows is what parts of its training data are tagged as "horse" and which general data patterns are likely to lead an observer to identify its output also as "horse." People sometimes describe this limitation as "a lack of soul" but it's perhaps more useful to think of it as a lack of comprehension.
Due to this lack of comprehension, AI art cannot communicate anything - or rather, the output tends to attempt to communicate everything, at random, all at once, and it's the visual equivalent of a kind of white noise. It lacks focus.
Human operators of AI generative tools can imbue communicative meaning into the outputs, and whip the models towards some sort of focus, because humans can do that with literally anything they turn their directed attention towards. Human beings can make art with paint spatters and bits of gum stuck under tennis shoes, of course a dedicated human putting tons of time into a process of trial and error can produce something meaningful with genAI tools.
The nature of genAI as a tool of creation is uniquely limited and uniquely constrained, a genAI tool can only ever output some mixture of whatever is in its training data (and what's in its training data is biased by the data that its creators valued enough to include), and it can only ever output that mixture according to the weights and biases of its programming and data set, which is fully within the control of whoever created the tool in the first place. Consequently, genAI is a tool whose full creative capacity is always, always, always going to be owned by corporations, the only entities with the resources and capacity to produce the most powerful models. And those models, thus, will always only create according to corporate interest. An individual human can use a pencil to draw whatever the hell they want, but an individual human can never use Midjourney to create anything except that which Midjourney allows them to create. GenAI art is thus limited not only by its mathematical tendency to bias the lowest common denominator, but also by an ideological bias inherited from whoever holds the leash on its creation. The necessary decision of which data gets included in a training set vs which data gets left out will, always and forever, impose de facto censorship on what a model is capable of expressing, and the power to make that decision is never in the hands of the artist attempting to use the tool.
tl;dr genAI art has a tendency to produce ideologically limited and intrinsically censored outputs, while defaulting to lowest common denominators that reproduce and reinforce status quos.
... on top of which its promulgation is an explicit plot by oligarchic industry to drive millions of people deeper into poverty and collapse wages in order to further concentrate wealth in the hands of the 0.01%. But that's just a bonus reason to dislike it.
2K notes
Text
How to Become a Data Scientist: Essential Skills and Training Programs

In today's data-driven world, the role of a data scientist has become increasingly valuable across industries. With the ability to analyze complex data sets and derive meaningful insights, data scientists play a crucial role in decision-making processes for businesses. If you're interested in pursuing a career in this field, it's important to understand the essential skills and training programs that can help you become a successful data scientist.
Quantitative and Analytical Skills: First and foremost, data scientists must possess strong quantitative and analytical skills. They should have a solid foundation in mathematics, statistics, and computational methods. Proficiency in programming languages such as Python or R is also essential for data manipulation and analysis. These skills form the building blocks for data scientists to effectively understand, interpret, and work with data.
Data Visualization and Communication: Data scientists need not only to analyze data but also to present their findings in a clear and concise manner. Data visualization tools such as Power BI or Tableau allow them to create compelling visual representations of complex data sets. Effective communication skills, both written and verbal, are also required to explain their findings to non-technical stakeholders within the organization.
Machine Learning and Artificial Intelligence: Machine learning and artificial intelligence (AI) are fundamental tools in the data science toolkit. Data scientists must have a good understanding of various machine learning algorithms and techniques. Knowledge of frameworks like TensorFlow, PyTorch, or scikit-learn is essential for building predictive models and for clustering and classification tasks. Staying updated with the latest advancements in AI is crucial for harnessing the true potential of data science.
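To illustrate the kind of classification task described above, here is a minimal scikit-learn sketch trained on one of the library's bundled toy datasets. It assumes scikit-learn is installed and is not tied to any particular real-world project.

```python
# Minimal classification sketch with scikit-learn, using the bundled iris dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a simple predictive model and check how well it generalizes
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Test accuracy:", accuracy_score(y_test, predictions))
```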
Domain Expertise: While technical skills are important, having domain expertise is equally vital for a data scientist. Understanding the industry or field in which you work allows you to ask the right questions, identify relevant data sources, and generate valuable insights. Whether it’s finance, healthcare, marketing, or any other sector, acquiring domain knowledge will set you apart as a data scientist.
Training Programs:
Academic Degrees: Pursuing a Bachelor's, Master's, or Ph.D. degree in a field like computer science, statistics, mathematics, or data science itself provides a solid foundation for a career in data science. These programs cover theoretical concepts, statistical methods, programming skills, and practical applications.
Online Courses and Bootcamps: There has been a rise in online platforms offering data science courses and bootcamps. Platforms like Coursera, edX, and Udacity provide comprehensive courses covering various aspects of data science, including programming, statistics, and machine learning with Python. These courses are flexible and allow you to learn at your own pace.
Industry-Specific Training: Some organizations offer specialized training programs tailored to the specific needs of their industry. These programs often provide hands-on experience with real-world data sets and practical problem-solving scenarios. Attending industry conferences, webinars, and workshops can also supplement your learning journey.
Conclusion:
Becoming a successful data scientist requires a combination of technical expertise, analytical skills, and domain knowledge. By honing your skills in mathematics, programming, machine learning, and data visualization, you can navigate the world of data science more effectively. Additionally, pursuing academic degrees or enrolling in online courses and industry-specific training programs can provide you with a well-rounded education. With a consistent effort to upgrade your skills and stay updated with the latest trends, you can establish yourself as a valuable data scientist in today's data-driven landscape.
#data analysis#data science#programming#python#training course#artificial intelligence#data visualization#power bi#data analytics
1 note
Text
"Power BI helps businesses collect, analyze, and visualize data. To boost your career, get Power BI certified. SCON Institute is a top Power BI certification training institute in Mumbai.
SCON Institute offers top-notch business analytics and data science training. A complete Power BI certification course covers data modeling, visualization, and report development. Hands-on Power BI training helps you solve real-world business problems.
SCON Institute offers Power BI certification training in Mumbai at a moderate cost. The cost of the training program depends on its length and certification level. SCON Institute offers inexpensive, high-quality training.
SCON Institute trainers have real-world Power BI experience. They understand the tool well and can offer practical advice on its use. Trainers will provide personalized advice throughout the training program.
SCON Institute also offers business analytics and data science courses beyond Power BI certification. This lets you learn more about the field.
SCON Institute offers affordable, high-quality Power BI certification training in Mumbai. One of the leading Power BI certification training institutions in the city, they offer thorough course content, hands-on instruction, and skilled trainers. Contact: 7676301459, [email protected]
#powerbi #microsoft #businessintelligence #excel #dataanalytics #datascience
#"Power BI helps businesses collect#analyze#and visualize data. To boost your career#get Power BI certified. SCON Institute is a top Power BI certification training institute in Mumbai.#SCON Institute offers top-notch business analytics and data science training. A complete Power BI certification course covers data modeling#visualization#and report development. Hands-on Power BI training helps you solve real-world business problems.#SCON Institute offers Power BI certification training in Mumbai at a moderate cost. The cost of the training program depends on its length#high-quality training.#SCON Institute trainers have real-world Power BI experience. They understand the tool well and can offer practical advice on its use. Train#SCON Institute offers other business analytics and data science courses than Power BI certification. This lets you learn more about this fi#SCON Institute offers affordable#high-quality Power BI certification training in Mumbai. One of the leading Power BI certification training institutions in the city#they offer thorough course content#hands-on instruction#and skilled trainers.#contact-7676301459#[email protected]#powerbi#microsoft#businessintelligence#excel#dataanalytics#datascience
0 notes
Text
What kind of bubble is AI?

My latest column for Locus Magazine is "What Kind of Bubble is AI?" All economic bubbles are hugely destructive, but some of them leave behind wreckage that can be salvaged for useful purposes, while others leave nothing behind but ashes:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
Think about some 21st century bubbles. The dotcom bubble was a terrible tragedy, one that drained the coffers of pension funds and other institutional investors and wiped out retail investors who were gulled by Superbowl Ads. But there was a lot left behind after the dotcoms were wiped out: cheap servers, office furniture and space, but far more importantly, a generation of young people who'd been trained as web makers, leaving nontechnical degree programs to learn HTML, perl and python. This created a whole cohort of technologists from non-technical backgrounds, a first in technological history. Many of these people became the vanguard of a more inclusive and humane tech development movement, and they were able to make interesting and useful services and products in an environment where raw materials – compute, bandwidth, space and talent – were available at firesale prices.
Contrast this with the crypto bubble. It, too, destroyed the fortunes of institutional and individual investors through fraud and Superbowl Ads. It, too, lured in nontechnical people to learn esoteric disciplines at investor expense. But apart from a smattering of Rust programmers, the main residue of crypto is bad digital art and worse Austrian economics.
Or think of Worldcom vs Enron. Both bubbles were built on pure fraud, but Enron's fraud left nothing behind but a string of suspicious deaths. By contrast, Worldcom's fraud was a Big Store con that required laying a ton of fiber that is still in the ground to this day, and is being bought and used at pennies on the dollar.
AI is definitely a bubble. As I write in the column, if you fly into SFO and rent a car and drive north to San Francisco or south to Silicon Valley, every single billboard is advertising an "AI" startup, many of which are not even using anything that can be remotely characterized as AI. That's amazing, considering what a meaningless buzzword AI already is.
So which kind of bubble is AI? When it pops, will something useful be left behind, or will it go away altogether? To be sure, there's a legion of technologists who are learning Tensorflow and Pytorch. These nominally open source tools are bound, respectively, to Google and Facebook's AI environments:
https://pluralistic.net/2023/08/18/openwashing/#you-keep-using-that-word-i-do-not-think-it-means-what-you-think-it-means
But if those environments go away, those programming skills become a lot less useful. Live, large-scale Big Tech AI projects are shockingly expensive to run. Some of their costs are fixed – collecting, labeling and processing training data – but the running costs for each query are prodigious. There's a massive primary energy bill for the servers, a nearly as large energy bill for the chillers, and a titanic wage bill for the specialized technical staff involved.
Once investor subsidies dry up, will the real-world, non-hyperbolic applications for AI be enough to cover these running costs? AI applications can be plotted on a 2X2 grid whose axes are "value" (how much customers will pay for them) and "risk tolerance" (how perfect the product needs to be).
Charging teenaged D&D players $10 a month for an image generator that creates epic illustrations of their characters fighting monsters is low value and very risk tolerant (teenagers aren't overly worried about six-fingered swordspeople with three pupils in each eye). Charging scammy spamfarms $500/month for a text generator that spits out dull, search-algorithm-pleasing narratives to appear over recipes is likewise low-value and highly risk tolerant (your customer doesn't care if the text is nonsense). Charging visually impaired people $100 a month for an app that plays a text-to-speech description of anything they point their cameras at is low-value and moderately risk tolerant ("that's your blue shirt" when it's green is not a big deal, while "the street is safe to cross" when it's not is a much bigger one).
Morgan Stanley doesn't talk about the trillions the AI industry will be worth some day because of these applications. These are just spinoffs from the main event, a collection of extremely high-value applications. Think of self-driving cars or radiology bots that analyze chest x-rays and characterize masses as cancerous or noncancerous.
These are high value – but only if they are also risk-tolerant. The pitch for self-driving cars is "fire most drivers and replace them with 'humans in the loop' who intervene at critical junctures." That's the risk-tolerant version of self-driving cars, and it's a failure. More than $100b has been incinerated chasing self-driving cars, and cars are nowhere near driving themselves:
https://pluralistic.net/2022/10/09/herbies-revenge/#100-billion-here-100-billion-there-pretty-soon-youre-talking-real-money
Quite the reverse, in fact. Cruise was just forced to quit the field after one of their cars maimed a woman – a pedestrian who had not opted into being part of a high-risk AI experiment – and dragged her body 20 feet through the streets of San Francisco. Afterwards, it emerged that Cruise had replaced the single low-waged driver who would normally be paid to operate a taxi with 1.5 high-waged skilled technicians who remotely oversaw each of its vehicles:
https://www.nytimes.com/2023/11/03/technology/cruise-general-motors-self-driving-cars.html
The self-driving pitch isn't that your car will correct your own human errors (like an alarm that sounds when you activate your turn signal while someone is in your blind-spot). Self-driving isn't about using automation to augment human skill – it's about replacing humans. There's no business case for spending hundreds of billions on better safety systems for cars (there's a human case for it, though!). The only way the price-tag justifies itself is if paid drivers can be fired and replaced with software that costs less than their wages.
What about radiologists? Radiologists certainly make mistakes from time to time, and if there's a computer vision system that makes different mistakes than the sort that humans make, they could be a cheap way of generating second opinions that trigger re-examination by a human radiologist. But no AI investor thinks their return will come from selling hospitals that reduce the number of X-rays each radiologist processes every day, as a second-opinion-generating system would. Rather, the value of AI radiologists comes from firing most of your human radiologists and replacing them with software whose judgments are cursorily double-checked by a human whose "automation blindness" will turn them into an OK-button-mashing automaton:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
The profit-generating pitch for high-value AI applications lies in creating "reverse centaurs": humans who serve as appendages for automation that operates at a speed and scale that is unrelated to the capacity or needs of the worker:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
But unless these high-value applications are intrinsically risk-tolerant, they are poor candidates for automation. Cruise was able to nonconsensually enlist the population of San Francisco in an experimental murderbot development program thanks to the vast sums of money sloshing around the industry. Some of this money funds the inevitabilist narrative that self-driving cars are coming, it's only a matter of when, not if, and so SF had better get in the autonomous vehicle or get run over by the forces of history.
Once the bubble pops (all bubbles pop), AI applications will have to rise or fall on their actual merits, not their promise. The odds are stacked against the long-term survival of high-value, risk-intolerant AI applications.
The problem for AI is that while there are a lot of risk-tolerant applications, they're almost all low-value; while nearly all the high-value applications are risk-intolerant. Once AI has to be profitable – once investors withdraw their subsidies from money-losing ventures – the risk-tolerant applications need to be sufficient to run those tremendously expensive servers in those brutally expensive data-centers tended by exceptionally expensive technical workers.
If they aren't, then the business case for running those servers goes away, and so do the servers – and so do all those risk-tolerant, low-value applications. It doesn't matter if helping blind people make sense of their surroundings is socially beneficial. It doesn't matter if teenaged gamers love their epic character art. It doesn't even matter how horny scammers are for generating AI nonsense SEO websites:
https://twitter.com/jakezward/status/1728032634037567509
These applications are all riding on the coattails of the big AI models that are being built and operated at a loss in the hope of someday becoming profitable. If they remain unprofitable long enough, the private sector will no longer pay to operate them.
Now, there are smaller models, models that stand alone and run on commodity hardware. These would persist even after the AI bubble bursts, because most of their costs are setup costs that have already been borne by the well-funded companies who created them. These models are limited, of course, though the communities that have formed around them have pushed those limits in surprising ways, far beyond their original manufacturers' beliefs about their capacity. These communities will continue to push those limits for as long as they find the models useful.
These standalone, "toy" models are derived from the big models, though. When the AI bubble bursts and the private sector no longer subsidizes mass-scale model creation, it will cease to spin out more sophisticated models that run on commodity hardware (it's possible that Federated learning and other techniques for spreading out the work of making large-scale models will fill the gap).
So what kind of bubble is the AI bubble? What will we salvage from its wreckage? Perhaps the communities who've invested in becoming experts in Pytorch and Tensorflow will wrestle them away from their corporate masters and make them generally useful. Certainly, a lot of people will have gained skills in applying statistical techniques.
But there will also be a lot of unsalvageable wreckage. As big AI models get integrated into the processes of the productive economy, AI becomes a source of systemic risk. The only thing worse than having an automated process that is rendered dangerous or erratic based on AI integration is to have that process fail entirely because the AI suddenly disappeared, a collapse that is too precipitous for former AI customers to engineer a soft landing for their systems.
This is a blind spot in our policymakers debates about AI. The smart policymakers are asking questions about fairness, algorithmic bias, and fraud. The foolish policymakers are ensnared in fantasies about "AI safety," AKA "Will the chatbot become a superintelligence that turns the whole human race into paperclips?"
https://pluralistic.net/2023/11/27/10-types-of-people/#taking-up-a-lot-of-space
But no one is asking, "What will we do if" – when – "the AI bubble pops and most of this stuff disappears overnight?"
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2023/12/19/bubblenomics/#pop
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
tom_bullock (modified) https://www.flickr.com/photos/tombullock/25173469495/
CC BY 2.0 https://creativecommons.org/licenses/by/2.0/
4K notes
Text
Generative AI Is Bad For Your Creative Brain
In the wake of early announcing that their blog will no longer be posting fanfiction, I wanted to offer a different perspective than the ones I’ve been seeing in the argument against the use of AI in fandom spaces. Often, I’m seeing the arguments that the use of generative AI or Large Language Models (LLMs) make creative expression more accessible. Certainly, putting a prompt into a chat box and refining the output as desired is faster than writing a 5000 word fanfiction or learning to draw digitally or traditionally. But I would argue that the use of chat bots and generative AI actually limits - and ultimately reduces - one’s ability to enjoy creativity.
Creativity, defined by the Cambridge Advanced Learner’s Dictionary & Thesaurus, is the ability to produce or use original and unusual ideas. By definition, the use of generative AI discourages the brain from engaging with thoughts creatively. ChatGPT, character bots, and other generative AI products have to be trained on already existing text. In order to produce something “usable,” LLMs analyze patterns within text to organize information into what the computer has been trained to identify as “desirable” outputs. These outputs are not always accurate due to the fact that computers don’t “think” the way that human brains do. They don’t create. They take the most common and refined data points and combine them according to predetermined templates to assemble a product. In the case of chat bots that are fed writing samples from authors, the product is not original - it’s a mishmash of the writings that were fed into the system.
Dialectical Behavioral Therapy (DBT) is a therapy modality developed by Marsha M. Linehan based on the understanding that growth comes when we accept that we are doing our best and we can work to better ourselves further. Within this modality, a few core concepts are explored, but for this argument I want to focus on Mindfulness and Emotion Regulation. Mindfulness, put simply, is awareness of the information our senses are telling us about the present moment. Emotion regulation is our ability to identify, understand, validate, and control our reaction to the emotions that result from changes in our environment. One of the skills taught within emotion regulation is Building Mastery - putting forth effort into an activity or skill in order to experience the pleasure that comes with seeing the fruits of your labor. These are by no means the only mechanisms of growth or skill development, however, I believe that mindfulness, emotion regulation, and building mastery are a large part of the core of creativity. When someone uses generative AI to imitate fanfiction, roleplay, fanart, etc., the core experience of creative expression is undermined.
Creating engages the body. As a writer who uses pen and paper as well as word processors while drafting, I had to learn how my body best engages with my process. The ideal pen and paper, the fact that I need glasses to work on my computer, the height of the table all factor into how I create. I don’t use audio recordings or transcriptions because that’s not a skill I’ve cultivated, but other authors use those tools as a way to assist their creative process. I can’t speak with any authority to the experience of visual artists, but my understanding is that the feedback and feel of their physical tools, the programs they use, and many other factors are not just part of how they learned their craft, they are essential to their art.
Generative AI invites users to bypass mindfully engaging with the physical act of creating. Part of becoming a person who creates from the vision in one’s head is the physical act of practicing. How did I learn to write? By sitting down and making myself write, over and over, word after word. I had to learn the rhythms of my body, and to listen when pain tells me to stop. I do not consider myself a visual artist - I have not put in the hours to learn to consistently combine line and color and form to show the world the idea in my head.
But I could.
Learning a new skill is possible. But one must be able to regulate one’s unpleasant emotions to be able to get there. The emotion that gets in the way of most people starting their creative journey is anxiety. Instead of a focus on “fear,” I like to define this emotion as “unpleasant anticipation.” In Atlas of the Heart, Brene Brown identifies anxiety as both a trait (a long term characteristic) and a state (a temporary condition). That is, we can be naturally predisposed to be impacted by anxiety, and experience unpleasant anticipation in response to an event. And the action urge associated with anxiety is to avoid the unpleasant stimulus.
Starting a new project, developing a new skill, and leaning into a creative endeavor can inspire anxiety and cause people to react to it. There is an unpleasant anticipation of things not turning out exactly correctly, of being judged negatively, of being unnoticed or even ignored. There is a lot less anxiety to be had in submitting a prompt to a machine than in looking at a blank page and possibly making what could be a mistake. Unfortunately, the more something is avoided, the more anxiety is generated when it comes up again. Using generative AI doesn’t encourage starting a new project and learning a new skill - in fact, it makes the prospect more distressing to the mind, and encourages further avoidance of developing a personal creative process.
One of the best ways to reduce anxiety about a task, according to DBT, is for a person to do that task. Opposite action is a method of reducing the intensity of an emotion by going against its action urge. The action urge of anxiety is to avoid, and so opposite action encourages someone to approach the thing they are anxious about. This doesn’t mean that everyone who has anxiety about creating should make themselves write a 50k word fanfiction as their first project. But in order to reduce anxiety about dealing with a blank page, one must face and engage with a blank page. Even a single sentence fragment, two lines intersecting, an unintentional drop of ink means the page is no longer blank. If those are still difficult to approach, a prompt, tutorial, or guided exercise can be used to reinforce the understanding that a blank page can be changed, slowly but surely by your own hand.
(As an aside, I would discourage the use of AI prompt generators - these often use prompts that were already created by a real person without credit. Prompt blogs and posts exist right here on tumblr, as well as imagines and headcannons that people often label “free to a good home.” These prompts can also often be specific to fandom, style, mood, etc., if you’re looking for something specific.)
In the current social media and content consumption culture, it’s easy to feel like the first attempt should be a perfect final product. But creating isn’t just about the final product. It’s about the process. Bo Burnham’s Inside is phenomenal, but I think the outtakes are just as important. We didn’t get That Funny Feeling and How the World Works and All Eyes on Me because Bo Burnham woke up and decided to write songs in the same day. We got them because he’s been developing and honing his craft, as well as learning about himself as a person and artist, since he was a teenager. Building mastery in any skill takes time, and it’s often slow.
Slow is an important word, when it comes to creating. The fact that skill takes time to develop and a final piece of art takes time regardless of skill is its own source of anxiety. Compared to @sentientcave, who writes about 2k words per day, I’m very slow. And for all the time it takes me, my writing isn’t perfect - I find typos after posting and sometimes my phrasing is awkward. But my writing is better than it was, and my confidence is much higher. I can sit and write for longer and longer periods, my projects are more diverse, I’m sharing them with people, even before the final edits are done. And I only learned how to do this because I took the time to push through the discomfort of not being as fast or as skilled as I want to be in order to learn what works for me and what doesn’t.
Building mastery - getting better at a skill over time so that you can see your own progress - isn’t just about getting better. It’s about feeling better about your abilities. Confidence, excitement, and pride are important emotions to associate with our own actions. It teaches us that we are capable of making ourselves feel better by engaging with our creativity, a confidence that can be generalized to other activities.
Generative AI doesn’t encourage its users to try new things, to make mistakes, and to see what works. It doesn’t reward new accomplishments to encourage the building of new skills by connecting to old ones. The reward centers of the brain have nothing to respond to that they can associate with the user’s own actions. There is a short term input-reward pathway, but it’s only associated with using the AI prompter. It’s designed to encourage the user to come back over and over again, not develop the skill to think and create for themselves.
I don’t know that anyone will change their minds after reading this. It’s imperfect, and I’ve summarized concepts that can take months or years to learn. But I can say that I learned something from the process of writing it. I see some of the flaws, and I can see how my essay writing has changed over the years. This might have been faster to plug into AI as a prompt, but I can see how much more confidence I have in my own voice and opinions. And that’s not something ChatGPT can ever replicate.
151 notes
Text
the inevitable tension between:
In an effort to demystify programming, which many people still think of as a skill that requires some sort of exceptional intelligence or training, I will always emphasize that basically anyone can do simple programming tasks like creating their own browser extension or simple website, using SQL to query a database, making data visualizations with matplotlib, etc. You can learn to do this in a week. If you're not sure you can for real just ask me (dm here or on Discord). it IS easy and anyone CAN do it! This isn't me saying "learn to code to get a job," which used to be decent advice 5 years ago but isn't any more because of AI and market saturation. More just "if there is something you want to do, at work or as a personal thing, that requires programming skills you think you might not have, I bet you can do it anyway. Do it!"
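(To give a hedged sense of scale, here's roughly what I mean by a simple task: a complete matplotlib bar chart in about ten lines. The numbers are made up and you'd need matplotlib installed, but that really is the whole program.)

```python
# A complete "simple programming task": plot some made-up monthly numbers
# as a bar chart and save the image to a file. Requires matplotlib.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
days_slept_past_noon = [3, 5, 2, 8, 6, 7]  # placeholder data

plt.bar(months, days_slept_past_noon)
plt.title("Days I slept past noon, by month")
plt.ylabel("Days")
plt.savefig("chart.png")  # writes chart.png next to the script
```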
Okay maybe the reason I think this way is because my dad was teaching me Java when I was literally 8 years old
103 notes
Text
certain stars (part 2) - a Shigaraki x reader fic

Nothing in your training prepared you for this: A deadly virus that burnt through Space Station Ultra, leaving only two survivors -- you, and Mission Specialist Shigaraki, trapped together in the command module. With time, food, and life-support running out, you have a choice about how you'll spend your final hours. You just wish you had any idea what you're supposed to do.
This is for @shigarakislaughter (happy birthday!) who asked for a forced-proximity roommates to lovers situation. Being me, I had to make it weird, and being one of my fics, it had to get away from me. Part 1 can be found here! Shigaraki x reader, rated M, space station au, angst + suggestive content. dividers by @cafekitsune.
part 1

You’ve been on the line with Mission Control for four hours, in a conversation that includes you only tangentially, and your eyes are starting to blur. This plan to save your life and Shigaraki’s without carrying the virus back to Earth was your idea. You have to be here to advocate for it, to address any questions Control might have, to find a way around any problems that might arise. You’re the pilot in command. It’s your job to get yourself and the last remaining member of your crew home.
But you’re so tired. It’s all you can do to write down the figures that are being named, calculating trajectories and fuel burns by hand to fact-check Mission Control’s results. It’s hard to do when they still haven’t decided if it’s safe for you and Shigaraki to return to Earth. The suspense would be killing you if you had any adrenaline left to spare.
As Mission Control continues to debate, no one willing to come right out and say that they’re not sure it’s a good idea to bring you back, Shigaraki slips into the seat beside you. You sent him into the shuttle with step-by-step instructions for running a full diagnostic, and he slides the results across the desk to you. You study them, the numbers difficult to read until you squeeze your eyes shut and open them again. Then you tap your mic and interrupt one of the flight director’s proteges in the middle of a soliloquy about reentry speed. “I have the shuttle diagnostics. All systems are operational.”
“What about the heat shield?”
That’s a sticking point. One of many. “Protocol is to do a visual inspection, but we can’t risk a spacewalk. Is there any way we can get a satellite view?”
You hear paper rustling, then a thud. It’s all too easy to picture one of the ensigns getting up in a hurry, tripping over themselves, and falling flat on their face before beelining to the comms center. “We’re investigating the prospect,” Director Sasaki says. “Every participating nation has offered their help, as have several non-participants and several corporate entities. If they elect to put their money where their mouths are, we should be able to give you multiple views of the heat shield.”
You nod, then remember they can’t see you. “Can someone check a compromise rate?”
“The compromise rate depends on your reentry angle,” the flight director says. You think her name’s Tatsuma. You’ve only met her once. “And your reentry angle depends on your landing site.”
“Which hasn’t been decided yet,” Shigaraki says, into your microphone, “because you jag-offs can’t make up your minds about whether we’re coming back at all.”
“Get your own headset,” you hiss, shooing him away. “Mission Specialist Shigaraki has a point. All of this is theoretical unless it’s safe to come home.”
“We told you that already,” Director Todoroki snaps. You roll your eyes. “Were you listening?”
You were probably trying to do math. You rub your eyes, and Shigaraki speaks into your mic again. “I didn’t hear it.”
Director Todoroki heaves a big, nasty sigh, and Director Bate, the current head of the space station program, speaks up. “Based on the data your crew collected, the virus thrives in the same conditions humans do. Extreme cold renders it inert, while extreme heat destroys it. The heat from reentry should cook that thing right off the exterior of the shuttle. Your return to Earth should be safe, as long as you land in the right place.”
“Only two concerns remain,” Director Sasaki says. “First, whether the damage your plan to purge the virus from Station Ultra will cause is worth the reward –”
You appreciate him giving it to you straight. “And secondly, whether the likely expulsion of your deceased crewmates’ bodies into space is an acceptable result.”
“Yeah,” you say. You’re too tired to stick to formal speech. “I thought that might be it.”
Your plan to clear Station Ultra of the virus involves blowing the airlocks on each of the infected modules, which will suck the virus back out into space, where it’ll go back to hibernating. It’ll work, but it’s likely to take the bodies of the crew with it. And the space program’s unofficial and unstated policy has always been to bring all the crewmembers home, dead or alive.
“Um –” Someone in Mission Control clears their throat. “I feel terrible saying this, but we can’t bring their bodies home. They died of the virus. They’re probably still carrying it. Asking the pilot and mission specialist to retrieve them is an unacceptable risk, and we can’t risk live virus entering the atmosphere.”
Someone protests. Dr. Shield, maybe – Dr. Shield, whose daughter died in the lab module, conducting research on the virus right up until it killed her. Director Tatsuma waits for him to finish, then speaks up. “The flight academy prepares its graduates for this. They are aware that this is the likely scenario if they should die outside the atmosphere.”
“The astronauts, sure. The mission specialists have families,” someone argues. You don’t know that voice. Your head hurts. “What are we supposed to tell them? That we just launched their loved ones’ corpses into space?”
“Yeah.” Shigaraki’s finally put on his headset. “Everybody who died here was a better person than me, and if I died up here, I wouldn’t care what the survivors did with my body.”
It’s quiet for a second. “Unless they wanted to eat it.”
You feel insane, hysterical laughter bubbling in the back of your throat and swallow it down. “I think you should ask the mission specialists’ families,” you say. “It’s their loved ones up here. Tell them what we’re up against and ask them what they want to do.”
“That’s unwise,” Director Sasaki says. There’s a pause. “We will reach out to them. Continue your preflight preparations, and we’ll contact you when a full protocol has been devised.”
The call drops, and you take off your headset. It doesn’t make your head hurt any less, but you’ll give it time. Next to you, Shigaraki does the same. “How long do you think it’ll take them to tell us no?”
You knew your crewmates, astronauts and mission specialists both. You met their families. You’re not convinced it’ll be a yes, but you’re not sure it’ll be a no, either. And there’s one crewmember you haven’t known long enough to make a guess. “Would you really be okay with your body being shot out into space?”
“Sure. Not like anybody’s waiting for it at home.” Shigaraki shrugs. “If you were starving, you could eat my corpse.”
This time, you don’t have to suppress your laughter. “Just me, though?”
“What, do you want to share or something?”
“No,” you say. You glance at him, noting the way-too-prominent bruise on his neck, remembering that there’s one just like it on his shoulder. He seemed into it, and you were into his reaction, so you went a little overboard. “I’m not good at sharing.”
Shigaraki’s pale enough that even the faint flush in his cheeks is as obvious as a neon sign. “Don’t act possessive. You only hooked up with me because we’re going to die soon.”
There’s a lot to address there, and you’re too tired to do it delicately. “We’re not going to die soon. I’ll find a way to get you home. I didn’t think you liked me. I only hooked up with you because I thought we were about to die. If we weren’t about to die we’d have gone on dates first.”
Shigaraki is staring at you now, eyes wide. Did you even speak a recognizable language, or were you just mumbling to yourself about nothing? You really don’t want to have to say it all again. You look away from him, even though it’s hard to do, and look down at your sheet of calculations. You can barely read them. You find a new piece of paper and start copying them down again. “What is that?” Shigaraki asks, peering over your shoulder as you rewrite equation after equation. “I thought we didn’t have a trajectory yet.”
“We don’t. But the basic reentry calculations were made assuming that the shuttle is at capacity, and it’s – not.” Not even close. “We’ll be coming down light. That changes things.”
“Huh.” Shigaraki’s chin comes to rest over your shoulder. “Why are you doing it by hand?”
“That was how they used to do everything,” you say. “Back in the early days. But the academy still teaches it, in case we lose contact with Mission Control or the onboard computer goes down. They don’t want us to be totally helpless without it.”
“Huh,” Shigaraki says again. “That’s a lot of physics for a bunch of meatheads.”
“Yeah. Almost like we aren’t meatheads after all.” You copy out the last equations, then elbow Shigaraki until he straightens up. “Check these for me, okay?”
“You don’t trust your calculations?”
“I can barely see straight,” you say. Shigaraki blinks. “I haven’t slept more than an hour or two at a stretch since this started, and this isn’t the kind of thing where mistakes are survivable. You’re an actual physicist. Just look at them.”
“Sure.” Shigaraki flips over the shuttle diagnostic and starts writing on the back.
You fold your arms on the console and rest your head on them, watching him work. You like seeing him locked in on something, even if you wish he’d stop scratching his neck with his free hand, and you wonder what his research profile looks like. What he works on when he’s not getting tossed into a shuttle he doesn’t want to be on. He must be in a lab or something. Or have his own. So –
Something occurs to you. “Should I have been calling you Dr. Shigaraki this whole time? Some people get mad about their titles not being used.”
“Some people are assholes,” Shigaraki says matter-of-factly. “I might be an asshole, but I’m not that kind of asshole.”
He frowns at something he’s just written. “Show me your first set of calculations.” You hand it over, and he identifies the mistake in seconds. “You rewrote it wrong on this page. With this reentry velocity we’d bounce right off the atmosphere.”
“This is why you needed to check it.”
“You got it right the first time,” Shigaraki says. His hand falls from the side of his neck to rest on the console, then edges out into the space between the two of you. You spend a little too long looking before it occurs to you to touch.
A green light starts blinking on the console, indicating a call from Control. You yank your hand away from Shigaraki’s and pull your headset on. “Yes?”
“The families of the mission specialists agreed to your plan,” Director Sasaki says, and exhaustion sweeps over you. Shigaraki is looking at you questioningly. You give a thumbs-up. “However, they requested some sort of commemoration before the airlocks are blown.”
You’ll think of something. “Understood. I’ve adjusted the reentry calculations to account for the lighter payload. Dr. Shigaraki is checking my work as we speak.”
Dr. Shigaraki is also rolling his eyes, but you don’t need to mention that. “We’ve developed a launch protocol,” Sasaki informs you, “which should account for a lighter payload. We also have identified a landing site for you, one which will render any surviving virus inert.”
“Yes,” Director Tatsuma says. “You’ll be aiming for the Ross Ice Shelf.”
You haven’t touched the airlocks, but it still feels like every iota of breathable air has just been sucked out of your lungs. “The – what?”
“A cold environment with little for the virus to feed on, in the unlikely event that any of it is left after reentry,” Sasaki says. “Rest assured, you will have plenty of runway. Do you have any questions?”
You can’t even get your mind around the thought. It feels unreal, like you’ve stumbled through a funhouse mirror into some other reality. Director Sasaki takes your silence for agreement and moves on. “We’ll plan to launch in six hours. In that time you will need to initiate a complete data transfer – everything from Station Ultra, in order to allow for proper diagnostics. Begin the procedure by –”
“I’ll do it.” Shigaraki cuts Director Sasaki off. He looks at you. “You’re going to sleep.”
You look at him blankly. Sasaki’s voice takes on a sharp edge. “The procedure is supposed to be completed by the commanding officer.”
“Yeah. Only you want the commanding officer to land the shuttle on an ice sheet in fucking Antarctica in six hours,” Shigaraki says. “The commanding officer’s going to rest until then. I’ll do your data transfer.”
It’s quiet for a second. “You will need to write this down.”
“I need to get a pen.” Shigaraki takes off his headset, takes off yours, and pulls you away from the console, back to the pile of blankets. “Why didn’t you say you weren’t sleeping when it was your turn?”
“You were having a hard time sleeping, too. It didn’t –” You break off as Shigaraki half-lifts you off your feet, then sets you down on the blankets. “I thought you hated zero gravity.”
“It has one or two perks.” Shigaraki pulls the blankets roughly over you, then fumbles in his flightsuit pocket. “Here.”
You find yourself looking at an old-style MP3 player, headphones already plugged in. You tuck one of them into your ear, and Shigaraki presses play. “What am I listening to?”
“The music,” Shigaraki says. You blink at him. “Musica universalis, on a loop. It helps me sleep.
You hear the first of the high, clear notes, reverberating off into infinity, and hide a yawn. “That’s not very restful.”
“It doesn’t need to be restful. It just needs to keep you calm.” Shigaraki tucks the other headphone into your ear without asking first, his roughened fingertips oddly gentle. “That’s what it sounds like in interstellar space. You’d hear it on your trip to Alpha Centauri and back.”
Your throat tightens, even as your eyelids grow heavy. “Get some sleep,” Shigaraki says. You catch his hand as he straightens up, holding on tight, wishing you knew what to say to him. Like you did when they told you about the landing site, you come up empty. The best you can do is give one more squeeze and let go, before you turn your head against a makeshift pillow that smells like him and fall asleep, the sound of space humming in your ears.
You settle into the shuttle’s cockpit, wrapping your gloved hands around the controls and watching the console come to life. You’ve piloted a shuttle up to Station Ultra three times, but this will only be your second reentry, and it’ll be a hell of a reentry. For a split second, you allow it to fill your mind, oozing into every corner of your thoughts, sending shooting pains through your fingers. What they’re expecting you to do is impossible. It can’t be done.
And then you glance sideways, at Shigaraki strapped into the copilot’s seat. The instant the shuttle detaches from Station Ultra, his fate is out of his hands and firmly in yours. He looks scared enough on his own. He doesn’t need to see it from you, too.
You take a deep breath, then let it go. “Walk me through the preflight checklist.”
Mission Control is in Director Tatsuma’s hands at the moment. One of her proteges takes you through it, system by system – propulsion, shielding, navigation, life-support, everything coming up positive. The satellite photos of the heat shield revealed a few tiny abnormalities, nothing that should cause trouble. Then again, there shouldn’t be viruses floating around in space.
Something occurs to you, and in the middle of a stir of the oxygen tanks, you find yourself laughing. “What?” Shigaraki demands. “What’s funny?”
“The virus,” you say. Shigaraki looks at you like you’re out of your mind. “It’s an extraterrestrial. We found the first alien.”
“From a research perspective, this was a very fruitful trip,” one of the ensigns pipes up. “The first confirmed contact with alien life, the first recordings of Shigaraki phenomena –”
Shigaraki coughs. “Of what?”
“And the first loss of a space station, Ensign Hado. Read the room,” Director Sasaki says severely. “All systems are go. Were you able to come up with a commemoration to share as you depressurize the modules?”
“Um, High Flight is traditional,” you say. “But it’s religious, and not everybody’s religious, so – I have a different one. Should I use that?”
“Can you deliver it while completing the depressurization sequence?”
“Yes.”
“Then begin the sequence with Module Five.”
Module Five was the dormitory module. Five of your crewmates died there. You blow the airlock and speak. “We never know how high we are, til we are called to rise.” Module One is next. You avert your eyes. “And then, if we are true to plan, our statures touch the skies –”
You blow Modules Three and Four next, sending Station Ultra into a calculated spin. In the seat next to you, Shigaraki closes his eyes, his jaw clenched. “The heroism we recite,” you continue, blowing the airlock on Module Six, “would be a daily thing; did not ourselves the cubits warp –”
Module Two. “For fear to be a king.” You squeeze your eyes shut, thinking of your crew, dead in the atmosphere, lost to the void. How they kept fighting, kept studying, until the very end. “Depressurization sequence complete.”
“Detach.”
“Detaching in three – two – one.” You disengage the seal between the shuttle’s airlock and the command module, pitch the nose of the shuttle down, and let the stolen momentum from the station’s spin carry you down towards the atmosphere. “Departing high orbit. Any updates to the trajectory?”
“Not as yet, but owing to the uniqueness of the landing site, a pilot who had the opportunity to fly the route in the simulator will –”
“I’m gonna be sick,” Shigaraki mumbles.
You glance over at him and see him taking his helmet off. “If you don’t put that back on right now, I’m going to –”
“Trouble in paradise?” A familiar voice comes in over the intercom, and your frustration with Shigaraki takes an instant backseat. “Long time no see, airhead.”
“Not long enough, birdbrain,” you mutter, and Hawks chuckles into the mic. “Flew this in the simulator, did you?”
“Easy as pie, at least for me,” Hawks says. If you make it through this, you’re going to beat him to death with his helmet. “But don’t you worry, Dr. Shigaraki. You’re in good hands with Airhead here. Second in our class at the Flight Academy. Want to guess who was first?”
“We tied,” you snap, over the sound of Shigaraki gagging into an airsickness bag. Neither of you have enough food in your stomachs to really vomit. “You’re not first just because they called our names in alphabetical order. Do you want to talk shit or beta this trajectory?”
“We can talk shit when you land,” Hawks agrees. “Okay. Your current angle looks good. On the count of five, initiate a two-second burn from your starboard engine. Five – four – three – two – one –”
You trigger the burn, your grip on the controls as relaxed as you can make it, and the shuttle dips sideways. The flight roughens almost immediately, rattling the entire cockpit as you brush against the atmosphere, then skip off again. “Ooh, okay. It looks like you’re not in the atmosphere yet,” Hawks says. You can’t tell if he’s mimicking the flight simulator’s voice or not, but you’re still going to kill him when you get back. “Let’s do another burn – two seconds, both engines –”
The shuttle’s left wing dips into the atmosphere without being repelled, and you feel the lurch as gravity takes hold and pulls. “Autopilot will do the rest,” Hawks says. “Nice and easy.”
It’s not. The shuttle’s too light – too light for gravity to pull you the rest of the way in, and the longer you spend in the atmosphere, the more likely it is that something will go wrong with the heat shield. The cockpit is heating up way too fast. “I’m doing another burn. Both engines.”
“The autopilot said –”
“It’s not flying this mission,” you snap. There’s a reason shuttles aren’t flown completely on autopilot. Autopilot can’t adapt. “I am. If we stay in here any longer, the virus isn’t the only thing that’s going to cook. Burn in three – two – one –”
It works this time. The shuttle leaves space behind and plunges into the thermosphere, and the cockpit rattles and heats up, growing hotter and hotter with every nanosecond that passes. It’s killing the virus, you remind yourself. You’re in a shuttle with a heat shield, but the virus is clinging to the hull, and it’ll be destroyed. Reentry always feels like hell, anyway. Somehow it’s so much worse when you know you’re almost home.
Shigaraki’s got his helmet back on, finally. You can hear his ragged breathing over the comms. Is he conscious? “Stay with me, Shigaraki. This part is normal.”
“This part blows,” Shigaraki mumbles through clenched teeth. “Tomura.”
“Hmm?”
“My name is Tomura.” He’s slumping sideways in his chair, limp against the restraints, his speech slurring. “Call me that.”
“Okay, you got it. Tomura.” You feel a brief twinge of embarrassment that you didn’t think to ask his given name before you hooked up with him. “If I call you Tomura, are you going to stay awake? I really need you to stay awake. We’re going to lose comms with Mission Control in a second here and I don’t want to do this alone.”
Hawks chooses that moment to break in. “You were right about the burn, but you’re coming in way too fast. Hit the brakes.”
“I can’t do that. I need the parachutes for the landing.” You take your eyes off the windscreen for a split second to check your position on the map. “If I cut momentum right now, we won’t make it to the landing zone.”
“And if you don’t cut speed, you’ll pancake into the ice at Mach 10!”
“If I hit the water and there’s virus left on the hull, that’s it. For everyone!” You hate the way your voice pitches up, cracks. “I’m getting to Antarctica, Hawks. One way or another.”
Hawks starts to say something else, but the comms cut off in a static flatline, just like they’ve done at this point on every reentry you’ve flown. It’s the first normal thing that’s happened on this flight, and it hits you like a splash of cold water across the back of your neck. This is a reentry flight. You studied this at the academy. What does a pilot do on reentry to cut altitude and gradually reduce speed? There has to be something. Somewhere –
The answer occurs to you, in the same moment as Shigaraki stirs in his seat beside you. “Hey,” you say quickly, keeping your voice calm. “Welcome back.”
“Are we there yet?” Shigaraki’s voice blurs. “Is it over?”
“We’re through the atmosphere,” you admit, “but we’ve got a problem. I don’t know how much you heard, but –”
“Too fast.” Shigaraki sits up with an effort. His expression is grim through his helmet’s visor. “Either we crash into the ice and kill ourselves, or crash into the ocean and kill everybody else.”
“Or we land on the icesheet and everybody lives.” You reach for the control panel and start making the adjustments, ignoring the alarms that sound. “There’s a way to land this shuttle.”
“How?” Shigaraki’s hands clamp down tightly on the armrests. “If we were going to die anyway, we should have stayed up there.”
“Why?” you ask. You check your trajectory one last time, then kill the engines. “It wasn’t worth it to try to get home?”
“Maybe. Except –” Shigaraki peels one hand off the armrest and clamps it down over his mouth as you put the shuttle into a gentle bank. “Don’t ask. Tell me what you’re doing.”
“I need to cut our speed, but if I deploy the parachutes now, I won’t have them to slow us down during the actual landing. So I’m going to slow us down the old-fashioned way. Like a glider.” You can tell that none of what you’re saying makes sense to Shigaraki. You keep talking anyway, adjusting the controls to create a gentle turn. “In the academy they make us study all kinds of aviation accidents. There were a couple where the aircraft lost both engines and had to descend and land without them. One time a flight crew landed a plane on a river like that and everybody got out alive.”
You can tell Shigaraki’s getting nauseous. Then again, you’re flying the shuttle like you’re going down an endless set of switchbacks, trimming speed by fractions on each one. “You’re the physics guy. Tell me what will happen if I burn enough momentum on the descent.”
“If I open my mouth I’ll hurl.” Shigaraki speaks through clenched teeth. If you actually succeed in landing this thing, he’ll wind up with the worst tension headache in history. “You know what you’re doing. Keep talking.”
You keep talking, narrating your bizarre flight pattern as the shuttle travels around the world once, then again, spiraling down with painful slowness. If this was a normal flight, you’d have hit your landing site already, and space shuttles aren’t designed with long-term atmospheric flight in mind. But just because they aren’t designed for it doesn’t mean they’re incapable of it. You’re not putting this thing through any ridiculous maneuvers. Just curving gently down, one S-turn after another, letting physics and gravity take care of the rest. Pilots before you have done this and lived. Pilots after you will do it and survive, too. You just hope none of them have to do it in a shuttle.
When you drop out of the upper atmosphere, gentle flight goes out the window. You’re still coming down fast, and your landing site is approaching. One more trip around the world and you’ll be there, and if you don’t land then, you won’t have enough altitude to make another rotation. You bring the engines back gently, get ready to pull the brakes. “This is it,” you tell Shigaraki. You risk the smallest glance his way. He’s pale, his brow furrowed, his mouth pressed into a thin line. “I’ve got this. It’ll be okay.”
“Promise?”
“Yeah.” You check your speed and your stomach lurches. Mission Control had better have given you the longest runway in aviation history. You complete a final S-curve, as long and winding as possible, then line yourself up. “Deploying landing gear.”
The landing gear won’t survive contact with the ice, but you don’t need it to; you just need the extra drag it’ll provide. Brakes next, starting out slow, then pushing harder by the second as your airspeed indicators begin to drop. You don’t even want to think about how fast you’re descending. The ground rushes up to meet you, and the ground proximity alarm starts to sound. TERRAIN. TERRAIN. PULL UP. “I can see it,” you snap at nothing. “Shut up.”
You’re not slow enough yet. You deploy the parachutes while you’re still in the air, and all at once you’re wrestling with the controls, diverting all power to hydraulics in order to maintain a steady flight. “Brace,” you order, like you’re a flight attendant on a plane that’s about to crash with no survivors. “Any second –”
The initial impact jars every bone in your body, and the next is just the same. The shuttle is acting like a skipping stone, touching down and bouncing up, and you already deployed the chutes. As if the bouncing’s not enough, every touchdown brings a series of jolts as the landing gear makes contact with the uneven terrain. You hit the brakes, pitch the nose of the shuttle ever so slightly up, and slam the back wheels down so hard that they crumple like a tin can.
Control’s going to kill you for how much damage you’re doing to the shuttle, but you can feel the drag reducing. Your skipping-stone maneuver devolves into a long skid across the ice, slowing by degrees, as you scan the horizon through the windscreen. No sign of the ocean. As far as you can see, there’s only ice.
Your console chimes, and you take a look at the indication. Hysterical laughter spills out of your mouth. “What?” Shigaraki asks. “Did we crash?”
“No,” you say, although you’re pretty sure the shuttle techs are going to disagree. “You’ll be interested to know that we’ve reached appropriate landing speed.”
“You’re out of your mind,” Shigaraki says, and you laugh harder. “We’re landing?”
“Landed,” you say. The shuttle bobs up once more, and you drop the nose down for a final time, planting it firmly into the ice. “Sorry. Now we’re landed.”
You cut the engines, open the comms channel to establish contact with Control, and start going through your post-flight checklist. Beside you, Shigaraki unbuckles his seat. “I’d stay down if I were you,” you say, knowing he won’t listen. “It’ll be just –”
He drapes himself over the back of your seat, his helmet knocking against yours. The move would startle you if you had any nerves left. As it is, you’re just bemused. “What are you doing?”
“If we died up there, we’d have died like this.” Shigaraki’s arms come up around you, holding on tight. “You’re not getting out of it just because we lived.”
“If that’s how it’s going to be, you owe me a date,” you say. You depressurize the cabin, taking off your helmet the instant there’s outside air to breathe. Shigaraki takes his off, then presses his face into the side of your neck in a way that makes your face heat up. “At least one.”
“That landing of yours took ten years off my life. You own me ten.”
Before you can argue back, the comms squawk to life. “This is Mission Control. Do you read?”
“We read, birdbrain,” you say, and Hawks laughs. You can hear cheering in the background, and you’ve been at Control during enough reentries to picture the scene perfectly. “You blew your landing site by a thousand kilometers, but we’ve got your position. Welcome back to Earth.”
“A drone is on its way to scan the hull for evidence of the virus,” Director Sasaki says into the microphone. “Once we’ve confirmed its absence, our extraction team will come to retrieve you.”
“In the meantime, sit tight,” Director Tatsuma says. There’s a pause. “Well done, Commander. That was quite a landing.”
“We made it,” you say. Your hands are shaking on the controls, and you pull them away. The instant they’re clear, Shigaraki grabs one, peeling it out of its glove. “That’s good enough.”
Tatsuma signs off, after instructing you to run a diagnostic and transmit the results, and you key in the command one-handed. Shigaraki’s got your other one pressed against his face. His skin is warm, his lips dry and cracked. His voice is muffled when he speaks. “I knew you could do it.”
“Yeah?” Your hand is shaking, no matter how you try to hold it still. Shigaraki presses it harder against his cheek. “How?”
“You promised.” Shigaraki’s voice is matter-of-fact, even if it’s rattling just as badly as yours. You give it a few more minutes before one or both of you goes into shock. “What happens now?”
“I don’t know.” There’s never been a mission like this in human history. You hope it never happens again. “Thanks for trusting me to get us home.”
This time, the pressure of Shigaraki’s mouth against your hand can’t be called anything but a kiss. “Any time.”
“I have good news, and I have news,” Yamada, the space program’s PR director, says from the other side of the glass. “Which one do you want first?”
You and Tomura glance at each other. “News,” you say, and Tomura’s grip on your hand tightens. “What’s going on?”
“I’ll start with the good news,” Yamada says, and Tomura rolls his eyes. “The good news is that you guys are cleared. You’re getting out of quarantine tonight, and there’s a hell of a welcoming party waiting for you. Your family’s here – and your friends, Dr. Shigaraki – and they’re hyped to see you.”
“Finally,” Tomura mutters. He won’t let you call him Dr. Shigaraki, or even just Shigaraki – it’s his name or nothing. “What’s the news?”
“The news is that there’s going to be press everywhere,” Yamada says, and sighs. “We’ve been beating them off with a stick, but we’ve been ordered to host a press conference, and they’re going to want to hear from you. I need to prep you for the kind of questions they’ll ask.”
“Go for it,” you say. Yamada grimaces. “What?”
“The media loves a narrative,” Yamada says. “The coverage of the Station Ultra disaster has been wall-to-wall for weeks, and so far, the only narrative they’ve been able to spin is a horror story. Which is what it is. It’s the worst loss of life in the history of spaceflight, and it was nothing anyone was prepared for. Things have been pretty dark. They want something else. And unfortunately, that something else is you.”
Tomura makes a face. You’re pretty sure you’re making the same one. “What does that mean?”
“If there’s anything redeemable about the mission, it’s attached to you two,” Yamada says. “The discovery of Shigaraki phenomena –”
“Stop calling it that,” Tomura says. “It sounds stupid.”
“It’s tradition, as far as I understand it. New stuff is named after the person who discovered it,” Yamada says. “There’s that, and then there’s that crazy landing the commander here pulled off. They’ve had pilots in simulators all around the world trying to copy that landing. Nobody’s been able to do it.”
“Because it was luck,” you say. Tomura elbows you. “It was. Any pilot will tell you that. I know how to fly, but I got lucky. All of this was us getting lucky.”
“We didn’t make it because we’re special or something,” Tomura says. “It could have been any of others, too.”
“I know,” Yamada says. “Everybody does, but nobody likes thinking about it. Like I said, they want their narrative, and they’re building it with or without you. You and me and everybody else in the program knows it was luck – mostly – but the media’s decided it was fate. The media likes a hero. The only thing they like better than a hero is a love story.”
“No,” you say at once. “They can’t make this about us. It’s not about us.”
“It’s not their fucking business,” Tomura says. “And they’re wrong about it.”
That’s news to you. “What?”
“It didn’t happen during the lockdown,” Tomura says. He’s glaring at Yamada through the glass at first. Then he looks to you. “I liked you before that. I was at the command module that night because I wanted to talk to you.”
His face always flushes awkwardly when he talks about his feelings, but he never backs off of it. It always gives you butterflies. “You still haven’t told me what you wanted to talk about. Are you going to?”
“I don’t need to,” Tomura says. “You already know.”
You smile in spite of yourself. Tomura’s eyes stay locked on yours, and you’re conscious of his hand in yours, his leg pressed against your own. You were in two separate chairs, but he dragged yours alongside his before you’d even sat down. On the other side of the glass, Yamada clears his throat. “You guys aren’t exactly beating the love story allegations here.”
Tomura’s face flushes worse than before. You look away with an effort. “What are they planning to ask about – us?”
“Like I said, they’ve already made up the story. They’ll just be looking for confirmation,” Yamada says. You grimace. “If you get a nosy one – I’ll try to avoid calling on those ones – they’ll ask you to elaborate. Don’t lie. The transcripts from the command module were made public, so they’ll call you out.”
Your stomach lurches. “Wait, all the transcripts?”
“No,” Yamada says. “You know the rules about documenting a mission. No filming in the bathroom, during a medical exam, or impromptu hookups in the command module. That got deleted on-sight. But there’s enough context in everything else for them to nail you two to the wall if you try to lie about it.”
The flush in Tomura’s face is slow to fade. “What else are they going to ask?”
“About what’s next for you two,” Yamada says. “If I were you, I’d work out an answer.”
He goes over the rest of the questions – lots of stuff about the mission for you, lots of stuff about his research for Tomura, things the two of you could talk about in your sleep. Then he leaves, and you and Tomura step away from the glass, retreating further into the quarantine unit. You’re still trying to catch up on sleep, so you climb back into the bed, which you haven’t made since the first time you turned it down. Tomura climbs in next to you without asking first.
Originally they were going to put you in separate quarantine units, but then they decided that they only wanted to risk contaminating one. It’s the size of a small apartment, ordinarily cramped for two, but compared to the command module it’s basically a penthouse. You and Tomura have all the space you could possibly need, if you wanted it. But you don’t.
You thought you and Tomura would be sick of each other after three weeks in close proximity, but the opposite’s happened. You feel better when you’re close to him, feel better knowing where he is, which works out pretty well with Tomura’s clinginess. You’ve felt okay here, with him. Not needing to go anywhere or do anything. Just being together, seeing what works, searching for something that doesn’t. So far, there’s nothing. There’s so much nothing that you’re dreading walking away.
He asked the question after you landed the shuttle, so it’s your turn now. “What happens now?”
“Press conference.”
“What about after that?” you ask. “If this is a thing, Tomura – you live in Japan. I live here.”
“Long-distance won’t work,” Tomura says, and your heart sinks. “I’ll move my lab.”
You roll over to stare at him, and Tomura looks back, like what he just said isn’t a little insane. “People are interested in my work. I’ve gotten formal offers from every research university with an astrophysics department. The offer from the one near here was pretty good. They aren’t even going to make me teach.”
“You don’t like teaching?” You fake surprise, and Tomura snorts. “If you’ve got offers from everywhere, you should go where you want to go. I don’t want to hold you back. I don’t want us to hold each other back.”
“Sure.” Tomura shrugs. “But you’re going to be around here, too, aren’t you? They’re making you an instructor at the flight academy.”
You wince. “How did you find out?”
“Read your mail. It was open already.” Tomura shrugs again, and you shove him lightly. “I’ll move my lab. You’ll teach meatheads how to fly. It’ll be fine.”
“Your friends are in Japan –”
“And they work in my lab,” Tomura says. “If I move my lab, they’re coming, too.”
This is what you want. Exactly what you want. And it seems a little too easy. “Are you sure?” When he nods, you speak up again, your voice wavering. “How?”
“I thought we were dead up there. And I didn’t have a job to do like you did. So I had time to think about stuff while I was staring out into the void.” Tomura closes the distance between the two of you, crawling halfway on top of you and burrowing into your shoulder the way he does when he doesn’t want you to see his face. “The universe is so big that human minds can’t comprehend it, and the space between habitable worlds is enormous, and entropy’s ripping the whole thing apart – and there’s fuck all we can do about it. There’s always going to be fuck all we can do about it.”
This is why you never learned about astrophysics. “That’s dark.”
“No shit.” Tomura’s voice is muffled. “I realized that there was something I could do about it. Up there, or down here. Anywhere. I get to choose if entropy wins – not for the universe, just for me. I’m not letting it win. So I’ll find a way to keep the things I want together.”
There’s something a little absurd about him, something you’ve grown fond of. Maybe fond is understating it. “You’re going to fight the laws of the universe.”
“Yeah. And win.” Tomura settles against you, a contented sigh exiting his mouth as your fingers wind through his hair. “Say what you want. If the reporters ask me, that’s what I’m telling them.”
“We’re definitely not beating the love story accusations if you tell them that.”
“Never said I wanted to.” Tomura’s voice is starting to blur into sleep. If you close your eyes, the two of you are going to nap like this straight through the press conference. “If your apartment doesn’t allow dogs, we’ll have to get a new one.”
Now you’re moving in together. It makes as much sense as anything else about this, which is to say it doesn’t. In some ways it feels like you never left orbit, or like you never landed the shuttle – everything is surreal, hard to believe. But you remember Tomura’s music of the spheres brushing against your eardrums, impossible to imagine and impossible to refute. You don’t have to believe. All you have to do is trust what you can see and hear and feel. And that’s him.
For a little while the thought is peaceful. Then something else pierces through it, something you can’t hold in. “I’m still a pilot,” you say. “They’re making me an instructor, and I can’t fly until my psych evals come up clean, but once they do – the program’s down two pilots. They’re going to send me up again.”
It’ll be a while. Right now the mechanics department is designing drones that can repair Station Ultra, outlining a system that will eliminate the need for spacewalks, but it’ll be a long time before it’s ready. Not long enough, though. You’re a long time from mandatory retirement. You’ll fly again. And when you do – “I’ll go with you,” Tomura says. “I still have work to do up there. And I’m not flying with anybody else.”
He yawns. “Deal?”
“Deal,” you say, and when you kiss him, you let yourself believe.
<- part 1
taglist: @shigarakislaughter @deadhands69 @dance-with-me-in-hell @evilcookie5 @cheeseonatower @koohiii @minniessskii @handumb @agente707 @lvtuss @xeveryxstarfallx @stardustdreamersisi @warxhammer @atspiss @shikiblessed @boogiemansbitch @baking-ghoul @issaortiz @aslutforfictionalmen @f3r4lfr0gg3r @lacrimae-lotos @fwxyz00
#shigaraki x reader#shigaraki x you#shigaraki tomura x reader#shigaraki tomura x you#tomura shigaraki x reader#tomura shigaraki x you#x reader#reader insert#man door hand hook car door
78 notes
Text
We need to talk about AI
Okay, several people asked me to post about this, so I guess I am going to post about this. Or to say it differently: Hey, for once I am posting about the stuff I am actually doing for university. Woohoo!
Because here is the issue. We are kinda suffering a death of nuance right now, when it comes to the topic of AI.
I understand why this is happening (basically everyone wanting to market anything is calling it AI, even though it is often a thousand different things), but it is a problem.
So, let's talk about "AI" that isn't actually intelligent: what the term means right now, what it is, what it isn't, and why it is not always bad. I am trying to keep it short, alright?
So, right now when anyone says they are using AI, they mean that they are using a program that functions based on what computer nerds call "a neural network", trained through a process called "deep learning" or "machine learning" (yes, those terms mean slightly different things, but frankly, you really do not need to know the details).
Now, the theory for this has been around since the 1940s! The idea had always been to create calculation nodes that mirror the way neurons in the human brain work. That looks kinda like this:
Basically, there are input nodes, in which you put some data; those do some transformations that kinda depend on the kind of thing you want to train it for, and in the end a number comes out that the program then "remembers". I could explain the details, but your eyes would glaze over the same way everyone's eyes glaze over in the class I have on this every Friday afternoon.
All you need to know: You put in some sort of data (that can be text, math, pictures, audio, whatever), the computer does magic math, and then it gets a number that has a meaning to it.
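To make the "magic math" a little less magic, here is a minimal sketch in Python (numpy only; the layer sizes, weights, and the "sick fish" interpretation are made up purely for illustration) of data being pushed through a tiny network until a single meaningful number comes out:

```python
import numpy as np

rng = np.random.default_rng(0)

# 3 input nodes -> 4 hidden nodes -> 1 output node
W1 = rng.normal(size=(3, 4))   # these weights are what training adjusts
W2 = rng.normal(size=(4, 1))

def forward(x):
    hidden = np.tanh(x @ W1)                 # each hidden node: weighted sum + squashing
    return 1 / (1 + np.exp(-(hidden @ W2)))  # squeeze the result into the range 0..1

print(forward(np.array([0.2, -1.0, 0.7])))   # one number that gets assigned a meaning
```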
And we actually have been using this in some form since the 80s. If any Digimon fans are here: there is a reason the digital world in Digimon Tamers was created at Stanford in the 80s. This was studied there.
But if it has been around for so long, why am I hearing so much about it now?
This is a good question, hypothetical reader. The very short answer is: some super-nerds found a way to make this work way, way better in 2012, and from that work (which went under the name of deep learning in Artificial Neural Networks, ANN for short) we got basically everything that TechBros have not shut up about for the last ten or so years. Including "AI".
Now, most things you think about when you hear "AI" are some form of generative AI. Usually it will use some form of an LLM, a Large Language Model, to process text, and a method called Stable Diffusion to create visuals. (Tbh, I have no clue what method audio generation uses, as the only audio AI I have looked into so far was based on wolf howls.)
LLMs were like this big, big breakthrough, because they actually appear to comprehend natural language. They don't, of course; to them, words and phrases are just statistical variables. Scientists also call them "stochastic parrots". But of course our dumb human brains love to anthropomorphize shit. So they go: "It makes human words. It gotta be human!"
It is a whole thing.
It does not understand or grasp language. But the mathematics behind it basically runs a statistical analysis over all the words and then produces a likely answer.
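To make that concrete, here is a deliberately tiny toy sketch (a word-pair counter with a made-up corpus, nowhere near a real LLM's scale) of "statistics in, likely answer out":

```python
from collections import Counter, defaultdict

# A toy "stochastic parrot": count which word tends to follow which,
# then predict the statistically most likely next word. No understanding, just odds.
corpus = "the fish is sick the fish is fine the tank is clean".split()

next_word = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_word[a][b] += 1

def predict(word):
    counts = next_word[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict("fish"))   # -> "is": a likely answer, not a comprehended one
```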
What you have to understand, however, is that LLMs and Stable Diffusion are just a tiny minority of the use cases for ANNs. Because research right now is starting to use ANNs for EVERYTHING. Some of it also partially uses Stable Diffusion and LLMs, but not to take away people's jobs.
Which is probably the place where I will share what I have been doing recently with AI.
The stuff I am doing with Neural Networks
The neat thing: if a neural network is open source, it is surprisingly easy to work with. Last year when I started with this I was so intimidated, but frankly, I will confidently say now: as someone who has been working with computers for more than 10 years, this is easier programming than most of the shit I did to organize databases. So, during this last year I did three things with AI: one for a university research project, one for my work, and one because I find it interesting.
The university research project trained an AI to watch video live streams of our biology department's fish tanks, analyse the behavior of the fish, and notify someone if a fish showed signs of being sick. We used a model named "YOLO" for this, which is very good at analyzing pictures, though the base framework did not know anything about things that don't live on land. So we needed to teach it what a fish was, how to analyze videos (as the base framework can only look at single pictures), and then we needed to teach it how fish were supposed to behave. We still managed to get that whole thing working in about 5 months. So... yeah. But nobody can watch hundreds of fish all the time, so without this, those fish will just die if something is wrong.
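For anyone curious what the video-analysis side of something like that looks like, here is a minimal sketch assuming the open-source ultralytics YOLO package; the weights file, class name, confidence threshold, and video path are hypothetical stand-ins, not the actual project's files:

```python
from ultralytics import YOLO

# hypothetical weights, fine-tuned on fish footage
model = YOLO("fish_behavior.pt")

# stream=True walks through the video frame by frame instead of loading it all at once
for result in model.predict(source="tank_livestream.mp4", stream=True):
    for box in result.boxes:
        label = model.names[int(box.cls)]
        if label == "sick_fish" and float(box.conf) > 0.8:
            print("possible sick fish detected - notify the biologists")
```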
The second is for my work. For this I used a really old neural network framework called tesseract. It has been around for ages – it started out at HP and was later open-sourced and maintained with Google's backing – and I mean ages: it grew out of that 1980s research, simply doing OCR. OCR being "optical character recognition", aka: if you give it a picture of writing, it can read that writing. My work has the issue that we have tons and tons of old paperwork that has been scanned and needs to be digitized into a database. But everyone who was hired to do this manually found it mind-numbing. Just imagine doing this all day: take a contract, look up certain data, fill it into a table, put the contract away, take the next contract and do the same. Thousands of contracts, 8 hours a day. Nobody wants to do that. Our company had been using another OCR software for this, but that one was super expensive. So I was asked if I could build something to do it. So I did. And it was so ridiculously easy, it took me three weeks. And it actually has a higher success rate than the expensive software we used before.
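The core of that kind of tool is surprisingly small. A minimal sketch, assuming the pytesseract wrapper around tesseract (the file name and the field pattern are made up for illustration, not taken from any real contract format):

```python
import re

import pytesseract
from PIL import Image

# read the scanned page into plain text
text = pytesseract.image_to_string(Image.open("contract_0001.png"))

# pull one (hypothetical) field out of the recognized text
match = re.search(r"Contract No\.?\s*[:#]?\s*(\S+)", text)
if match:
    print("contract number:", match.group(1))
```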
Lastly there is the one I am doing right now, and this one is a bit more complex. See: we have tons and tons of historical material that has never been translated – papyri, stone tablets, letters, manuscripts, whatever. Right now I am taking tesseract (which by now is open source) and developing it further so it can read handwritten text and completely different scripts than what it knows so far. Once it can reliably do the OCR, I plan to hook it up to an LLM to then translate those texts. Because here is the thing: these things have not been translated because there are just not enough people who speak those old languages. Which leads to people going: "GASP! We found this super important document that actually shows things from the ancient world we have wanted to know about forever, and it was lying in our collection collecting dust for 90 years!" I am not the only person who has this idea, and yeah, I just hope maybe we can get something going in the next few years to help historians and archeologists do their work.
Make no mistake: ANNs are saving lives right now
Here is the thing: ANNs and deep learning are saving lives right now. I really cannot stress enough how quickly this technology has become incredibly important in fields like biology and medicine, analyzing data and predicting outcomes in a way that a human just never would be capable of.
I saw a post yesterday saying "AI" can never be a part of Solarpunk. I heavily disagree with that. Solarpunk, for example, would need the help of AI for a lot of stuff, as it can help us deal with ecological issues, might be able to predict weather in ways we are not capable of, and will help with medicine, with plants, and with so many other things.
ANNs are a good thing in general. And yes, they might also be used for some things that are just fun.
And for things that we may not need to know, but that would be fun to know. Like I mentioned above: the only audio research I read through was based on wolf howls. Basically there is a group of researchers trying to understand wolves, and they are using AI to analyze the howling and grunting and find patterns in there that humans are not capable of spotting due to human bias. So maybe AI will help us understand some animals at some point.
Heck, we have seen so far that some LLMs are capable of extrapolating, on their own, from being taught one version of a language to automatically understanding another version of it – like going from modern English to Old English and such. Which is why some researchers wonder if they might actually be able to understand languages that were never deciphered.
All of that is interesting and fascinating.
Again, the generative stuff is a very, very minute part of what AI is being used for.
Yeah, but WHAT ABOUT the generative stuff?
So, let's talk about the generative stuff. Because I kinda hate it, but I also understand that there is a big issue.
If you know me, you know how much I freaking love the creative industry. If I had more money, I would just throw it all at all those amazing creative people online. I mean, fuck! I adore y'all!
And I do think that art fully created by AI is basically lacking the human "heart" - or to phrase it more artistically: it is lacking the chemical imbalances that make a human human, lol. Same goes for writing. After all, an AI is actually incapable of creating a complex plot and all of that. And even if we managed to train it to do it, I don't think it should.
AI saving lives = good.
AI doing the shit humans actually evolved to do = bad.
And I also think that people who just do the "AI Art/Writing" shit are lazy and need to just put in work to learn the skill. Meh.
However...
I do think that these forms of AI can have a place in the creative process. There are people creating works of art that use some assets created with genAI but still putting in hours and hours of work on their own. And given that collages are legal to create - I do not see how this is meaningfully different. If you can take someone else's artwork as part of a collage legally, you can also take some art created by AI trained on someone else's art legally for the collage.
And then there is also the thing... Look, right now there is a lot of crunch in a lot of creative industries, and a lot of the work is not the fun creative kind, but the annoying creative kind that nobody actually enjoys and still eats hours and hours before deadlines. Swen the Man (the Larian boss) spoke about that recently: how mocapping often creates artifacts where the computer pipeline used to record it (which is already partially algorithmic) gets janky. So far this has been cleaned up by humans, and it is shitty, brain-numbing work most people hate. You can train AI to do this.
And I am going to assume that in normal 2D animation there are also more than enough clean-up steps and such that nobody actually likes to do, where this can help prevent crunch. Same goes for those overworked souls doing movie VFX, who have been working 80-hour weeks for the last 5 years. In movie VFX we just do not have enough workers. This is a fact. So, yeah, if we can help those people out: great.
If this is all directed by a human vision and just helping out to make certain processes easier? It is fine.
However, something that is just 100% AI? That is dumb and sucks. And it sucks even more that people's fanart, fanfics, and also commercial work online got stolen for it.
And yet... yeah, I am sorry, but I am afraid I have to join the camp of "criminalizing the taking of training data is a really bad idea." Because yeah... it is fucking shitty how Facebook, Microsoft, Google, OpenAI and whoever are using this stolen data to create programs to make themselves richer and whatnot, while not even making their models open source. BUT... if we outlawed it, the only people capable of even creating such algorithms – which absolutely can help in some processes – would be big media corporations that already own a ton of data for training (basically Disney, Warner and Universal), who would then get a monopoly. And that would actually be a bad thing. So, like... both options suck. There is no good solution, I am afraid.
And mind you, Disney, Warner, and Universal would still not pay their artists for it. lol
However, that does not mean, you should not bully the companies who are using this stolen data right now without making their models open source! And also please, please bully Hasbro and Riot and whoever for using AI Art in their merchandise. Bully them hard. They have a lot of money and they deserve to be bullied!
But yeah. Generally speaking: Please, please, as I will always say... inform yourself on these topics. Do not hate on stuff without understanding what it actually is. Most topics in life are nuanced. Not all. But many.
#computer science#artifical intelligence#neural network#artifical neural network#ann#deep learning#ai#large language model#science#research#nuance#explanation#opinion#text post#ai explained#solarpunk#cyberpunk
27 notes
Text
The Millennials' Polar Expedition
A year ago today (23 Nov 2022), I launched Worst Journey Vol.1 at the Scott Polar Research Institute. This is the text of the speech I gave to the lovely people who turned up to celebrate.
As many of you know, my interest in the Terra Nova Expedition was sparked by Radio 4’s dramatisation of The Worst Journey in the World, now 14 years ago. The story is an incredible story, and it got its claws into me, but what kept me coming back again and again were the people. I couldn’t believe anyone so wonderful had ever really existed. So when I finally succumbed to obsession and started reading all the books, it was the expedition members’ own words which I most cherished. These were not always easy to come by, though, so plenty of popular histories were consumed as well. Reading both in tandem, it soon became clear that, while there were some good books out there, there was a lot of sloppy research in the polar echo chamber as well.
I also discovered that no adaptation had attempted to get across the full scope of the expedition. There has never been a full and fair dramatic retelling, all having been limited by time, budget, or ideology from telling the whole story truthfully. I was determined that my adaptation would be both complete and accurate, and be as accountable as possible to those precious primary documents and the people who wrote them.
So the years of research began. I moved to Cambridge to be able to drop in at SPRI and make the most of the archives. Getting to Antarctica seemed impossible, but I went to New Zealand to get at least that much right, and on the way back stayed with relatives in Alberta, the most Antarctic place I could realistically visit. I gathered reference for objects wherever I could. Because Vol.1 takes place mainly on the Terra Nova, which is now a patch of sludge on the seabed off Greenland, I cobbled together a Franken-Nova in my mind, between the Discovery up in Dundee and the Star of India in San Diego. I spent a week on a Jubilee Sailing Trust ship in order to depict tall-ship sailing correctly. I’m sure I’ve still got loads of things wrong, but I did all I could, to get as much as I could, right.
But still, everyone I met who had been to Antarctica said, “you can’t understand Antarctica until you’ve been there, and you can’t tell the story without understanding Antarctica; you have to go.” So I applied to the USAP’s Antarctic Artists and Writers Program, with faint hope, as they do “Ahrt” and I draw cartoons. But I must have blagged a good grant proposal, because a year after applying, I was stepping out of a C-17 onto the Ross Ice Shelf. The whole trip would have been worth it just to stand there, turn in a circle, and see how all the familiar photographs fit together. But the USAP’s generosity didn’t stop there, and in the next month I saw Hut Point, Arrival Heights, the Beardmore Glacier (including the moraine on which the Polar Party stopped to “geologise”), and Cape Crozier, and made three visits to the Cape Evans hut. Three! On top of the visual reference I got priceless qualitative data. The hardness of the sound. The surprising warmth of the sun. The sugary texture of the snow. The keen edge on a slight breeze. The way your fingertips and toes can start to go when the rest of you is perfectly warm. The SHEER INSANITY of Cape Crozier. The veterans were right – I couldn’t have drawn it without having been there, but now I have, and can, and I am more grateful than I can ever adequately express. With all these resources laid so copiously at my feet, all I had to do was sit down and draw the darn thing. Luckily I have some very sound training to back me up on that.
Now, this is all very well for the how of making the book, and, I hope, interesting enough. But why? Why am I putting so much effort into telling this story, and why now?
Well, it means a lot to me personally. To begin to understand why, you need to know that I grew up in the 80s and 90s, at the height of individualist, goal-oriented, success-driven, dog-eat-dog, devil-take-the-hindmost neoliberalism. It was just assumed that humans, when you get right down to it, were basically self-interested jerks, and I saw plenty of them around so I had no reason to question this assumption. The idea was that if you did everything right, and worked really hard, you could retire at 45 to a yacht in the Bahamas, and if you didn’t retire to a yacht, well, you just hadn’t tried hard enough. Character, in the sense of rigorous personal virtue, was for schmucks. What mattered was success. Even as my politics evolved, I still took it as a given that this was how the world worked, and that was how people generally were – after all, there was no lack of corroborating evidence. So: I worked really hard. I single-mindedly pursued my self-interest. I made sacrifices, and put in the time, and fought my way into my dream job and all the success I could have asked for.
And then I met the Terra Nova guys.
What struck me most about them was that even when everything was going wrong, when their expectations were shattered and they had to face the cruellest reality, they were still kind. Not backbiting, recriminating, blame-throwing, defensive, or mean, as one would expect – they were lovely to each other, patient, supportive, self-sacrificing; in fact the worse things got, the better they were. They still treated each other as friends even when it wasn’t in their self-interest, was even contrary to their self-interest. I didn’t know people could be like that. But there they were, in plain writing, being thoroughly, bafflingly, decent. Not just the Polar Party – everyone had to face their own brutal realities at some point, and they all did so with a grace I never thought possible.
It presented a very important question:
When everything goes belly-up, and you’re facing the worst, what sort of person will you be?
Or perhaps more acutely: What sort of person would you rather be with?
It was so contrary to the world I lived in, to the reality I knew – it was a peek into an alternate dimension, populated entirely with lovely, lovely people, who really, genuinely believed that “it’s not whether you win or lose, but how you play the game,” and behaved accordingly. It couldn’t be real. There had to be a deeper, unpleasant truth: that was how the world worked, after all. I kept digging, expecting to hit bottom at some point, but I only found more gold, all the way down. How could I not spend my life on this?
Mythology exists to pass on a culture’s values, moral code, and survival information – how to face challenges and prevail. Scott’s story entered the British mythology, and had staying power, because it exemplified those things so profoundly for the culture that created and received it. But the culture changed, and there were new values; Scott’s legacy was first inverted and then cast aside. The new culture needed a new epic hero. You’d think it would be Amundsen, the epitome of ruthless success, but “Make Plan – Execute Plan – Go Home” has no mythic value, so he didn’t stick. The hero needed challenges, he needed setbacks, and he needed to win, on our terms.
Shackleton! Shackleton was a winner! Shackleton told us what we knew to be true and wanted to hear at epic volume: that if you want something badly enough, and try really hard, you will succeed! (Especially if you can control the narrative.) Scott, on the other hand, tells us that if you want something badly enough, and try really hard . . . you may nevertheless die horribly in the snow. Nobody wants to hear that! What a downer! I think it’s no coincidence that Shackleton exploded into popular culture in the late 90s and has dominated it ever since: he is the mythic hero of the zeitgeist. I am always being asked if I’ll be doing Shackleton next. He has six graphic novels already! That is plenty! But people still want to tell and be told his story, because it’s a heroic myth that validates our worldview.
That’s why I am so determined to tell the Scott story, because Scott is who we don’t realise we need right now – and Wilson, and Bowers, and Cherry, and Atch, and all the rest. The Terra Nova Expedition is the Millennials’ polar expedition. We’ve worked really hard, we’ve done everything we were supposed to, we made what appeared to be the right decisions at the time, and we’re still losing. Nothing in the mythology we’ve been fed has prepared us for this. No amount of positive attitude is going to change it. We have all the aphorisms in the world, but what we need is an example of how to behave when the chips are down, when the Boss is not sailing into the tempest to rescue us, when the Yelcho is not on the horizon. When circumstances are beyond your power to change, how do you make the best of your bad situation? What does that look like? Even if you can’t fix anything, how do you make it better for the people around you – or at the very least, not worse? Scott tells us: you can be patient, supportive, and humble; see who needs help and offer it; be realistic but don’t give in to despair; and if you’re up against a wall with no hope of rescue, go out in a blaze of kindness. We learn by imitation: it’s easy to say these things, but to see them in action, in much harder circumstances than we will ever face, is a far greater help. And to see them exemplified by real, flawed, complicated people like us is better still; they are not fairy-tale ideals, they are achievable. Real people achieved them.
My upbringing in the 80s milieu of selfishness, which set me up to receive the Scott story so gratefully, is hardly unique. There are millions of us who are hungry for a counter-narrative. My generation is desperate for demonstrations of caring, whether it’s activism or social justice or government policies that don’t abandon the vulnerable. We’ve seen selfishness poison the world, and we want an alternative. The time for competition is past; we must cooperate or perish, but we don’t know how to do it because our mythology is founded on competition. The Scott story, if told properly, explodes the Just World Fallacy, and liberates us from the lie that has ruled our lives: that you make your own luck. What happens, happens: what matters is how you respond to it. My obsession with accuracy is in part to honour the men, and in part because Cherry was the ultimate stickler and he’d give me a hard time if I didn’t, but also because, if I’m telling the story to a new generation, I’m damn well going to make sure we get that much RIGHT. It’s been really interesting to see, online, how my generation and the next have glommed onto polar exploration narratives, not as thrilling feats of derring-do, but as emotional explorations of found family and cooperative resilience. We love them because they love each other, and loving each other helps get them through, and we want – we need – to see how that’s done. It’s time to give them the Terra Nova story, and to tell it fully, fairly, and honestly, in all its complexity, because that is how their example is most useful to us. Not as gods, and not as fools, but as real human beings who were excellent to each other in the face of disaster. I only hope that I, a latecomer to their ways, can do them justice.
#scott expedition#terra nova expedition#the worst journey in the world#captain scott#polar party#robert falcon scott#birdie bowers#edward adrian wilson#bill wilson#character#millennials#polar exploration#heroic age#adventure#sociology#neoliberalism
223 notes
Note
Ok. It's pretty clear you are more welcoming of AI, and it does have enough merits to not be given a knee jerk reaction outright.
And how the current anti-ai stealing programs could be misused.
But aren't so many of the models built on stolen art? That is one of the big things keeping me from freely enjoying it.
The stolen art is a thing that needs to be addressed.
Though I agree that the ways such addressing is being done are not ideal. Counterproductive, even.
I could make a quip here and be like "stolen art??? But the art is all still there, and it looks fine to me!" And that would be a salient point about the silliness of digital theft as a concept, but I know that wouldn't actually address your point because what you're actually talking about is art appropriation by generative AI models.
But the thing is that generative AI models don't really do that, either. They train on publicly posted images and derive a sort of metadata - more specifically, they build a feature space mapping out different visual concepts together with text that refers to them. This is then used at the generative stage in order to produce new images based on the denoising predictions of that abstract feature model. No output is created that hasn't gone through that multi-stage level of abstraction from the training data, and none of the original training images are directly used at all.
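To make that abstraction concrete, here is a toy sketch of the generative (denoising) stage, just the shape of the loop rather than any real model's code. The predict_noise function below is a made-up stand-in for the trained network; the point is that the only things in play at generation time are random noise, the prompt embedding, and the model's learned weights, never the training images themselves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the trained network. In a real diffusion model this is a huge
# neural net whose weights encode the learned feature space; it does not look
# up or copy any training image -- it only predicts "what part of this is noise".
def predict_noise(noisy_image, prompt_embedding, t):
    # Hypothetical placeholder logic, just so the loop runs end to end.
    return 0.1 * noisy_image + 0.001 * prompt_embedding.mean()

prompt_embedding = rng.normal(size=512)   # stand-in for the encoded text prompt
image = rng.normal(size=(64, 64, 3))      # generation starts from pure noise

# The generative stage: repeatedly estimate and remove noise. The training set
# appears nowhere in this loop -- only the model's learned parameters do.
for t in reversed(range(50)):
    image = image - predict_noise(image, prompt_embedding, t)
```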
Due to various flaws in the process, you can sometimes get a model to output images extremely similar to particular training images, and it is also possible to get a model to pastiche a particular artist's work or style, but this is something that humans can also do and is a problem with the individual image that has been created, rather than the process in general.
Training an AI model is pretty clearly fair use, because you're not even really re-using the training images - you're deriving metadata that describes them, and using that metadata to build new images. This is far more comparable to the process by which human artists learn concepts than the weird sort of "theft collage" that people seem to be convinced is going on. In many cases, the much larger training corpus of generative AI models means that an output will be far more abstracted from any identifiable source data (source data in fact is usually not identifiable) than a human being drawing from a reference, something we all agree is perfectly fine!
The only difference is that the AI process is happening in a computer with tangible data, and is therefore quantifiable. This seems to convince people that it is in some way more ontologically derivative than any other artistic process, because computers are assumed to be copying whereas the human brain can impart its own mystical juju of originality.
I'm a materialist and think this is very silly. The valid concerns around AI are to do with how society is unprepared for increased automation, but that's an entirely different conversation from the art theft one, and the latter actively distracts from the former. The complete refusal from some people to even engage with AI's existence out of disgust also makes it harder to solve the real problem around its implementation.
This sucks, because for a lot of people it's not really about copyright or intellectual property anyway. It's about that automation threat, and a sort of human condition anxiety about being supplanted and replaced by automation. That's a whole mess of emotions and genuine labour concerns that we need to work through and break down and resolve, but reactionary egg-throwing at all things related to machine learning is counterproductive to that, as is reading out legal mantras paraphrasing megacorps looking to expand copyright law to cover shit like "art style".
I've spoken about this more elsewhere if you look at my blog's AI tag.
159 notes
·
View notes
Text
Training ChatGPT to be Data is easy: all you have to do is tell it that it is Data from the Enterprise-D and then ask it a question. For example:
Ask it: "What is your earliest memory, Data?"
My earliest memory is the moment I was first activated on Omicron Theta. When Dr. Noonien Soong and his wife, Juliana, completed my construction, I was brought online in their laboratory.
I recall the initial sensation of awareness—my positronic brain coming to life, processing information for the first time. My first visual input was of Dr. Soong, observing me with great anticipation. I remember him speaking to me, though at the time, I did not yet understand the significance of his words. I was like an empty vessel, absorbing data but lacking context.
In my earliest moments, I began learning—how to move, how to interpret sensory input, how to communicate. Everything was new, yet my programming allowed me to adapt quickly. Dr. Soong guided me, teaching me language, knowledge, and self-awareness. Though I did not yet comprehend emotion, I have since come to realize that Dr. Soong viewed me not just as a creation, but as a kind of son.
That first memory is a defining moment for me. It represents the beginning of my journey—not just as an android, but as an individual seeking to understand existence.
It’s like stepping into a portal to talk to Data, not perfect but as close as we can get 🤣
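If you want to step through that portal from code instead of the chat window, here is a minimal sketch using the official OpenAI Python SDK. The model name is an assumption (swap in whatever current chat model you have access to); the system message is what keeps it in character for the whole conversation.

```python
# pip install openai  (and set OPENAI_API_KEY in your environment)
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: use whichever chat model you have access to
    messages=[
        {
            "role": "system",
            "content": (
                "You are Lt. Commander Data of the USS Enterprise-D. "
                "Stay in character, answer precisely, and do not use contractions."
            ),
        },
        {"role": "user", "content": "What is your earliest memory, Data?"},
    ],
)

print(response.choices[0].message.content)
```

For follow-up questions, append the previous replies and your new question to the messages list, so "Data" keeps its memory of the conversation.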
9 notes
·
View notes
Text
To Poison Art and Ruin AI
Hey, you! Do you hate AI art generation? Me too! Here is the information you need to sabotage AI using Nightshade!
*Note: Information on how it works and links to articles about it are below the cut (as well as examples). Also, I am no expert on AI or Nightshade, so take everything I say with a grain of salt.*
*More Important Note: Because the program works by adding artifacting to your art to make it poisonous to AI, it is very likely that your art would get marked as "AI" if you run it through an AI detector. Not sure if that matters to anyone, but it's important to note.*
Guides for Installing and Information
Nightshade from the University of Chicago
Nightshade User and Installation Guide
How it Works
AI art generation is trained on art taken from the internet. And artists put their art on the internet. So, how do you stop AI from stealing it? You don't. You "poison" your art, which messes with how AI views it. To poison your art, people can use the "Nightshade" program created by Prof. Ben Zhao and the Nightshade Team from the University of Chicago.
This poisoning actually causes the model trained on the poisoned data to malfunction! As stated in an article by Melissa Heikkilä from MIT Technology Review, "The poisoned data is very difficult to remove, as it requires tech companies to painstakingly find and delete each corrupted sample." Feeding an AI around 100 poisoned pieces of art can be enough to corrupt its handling of a concept, making the prompt "car" turn into... a cow...
However, it's always good to consider this program's limitations. As stated in the article from the Nightshade Team, "[1] Changes made by Nightshade are more visible on art with flat colors and smooth backgrounds. Because Nightshade is about disrupting models, lower levels of intensity/poison do not have negative consequences for the image owner. Thus we have included a low intensity setting for those interested in prioritizing the visual quality of the original image. [2.] As with any security attack or defense, Nightshade is unlikely to stay future proof over long periods of time. But as an attack, Nightshade can easily evolve to continue to keep pace with any potential countermeasures/defenses."
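If you're curious what this kind of attack looks like in code, here is a toy sketch of the general idea of feature-space poisoning. To be clear, this is not Nightshade's actual code or method, just an illustration: nudge the image so a feature extractor "sees" a different concept while the visible change stays small. The tiny randomly initialized extractor below is a stand-in; a real attack would target the image encoder the generative model actually uses.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in feature extractor (random weights, just to make the sketch runnable).
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 64),
)

image = torch.rand(1, 3, 64, 64)        # the artwork, e.g. labelled "car"
target_features = torch.randn(1, 64)    # features of a different concept, e.g. "cow"

delta = torch.zeros_like(image, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(200):
    opt.zero_grad()
    poisoned = (image + delta).clamp(0, 1)
    # Push the image's features toward the wrong concept while keeping the
    # pixel-level change small, so it still looks like a car to human eyes.
    loss = nn.functional.mse_loss(feature_extractor(poisoned), target_features) \
           + 0.1 * delta.abs().mean()
    loss.backward()
    opt.step()

poisoned_image = (image + delta).detach().clamp(0, 1)
```

A model trained on enough images whose pixels say "car" but whose features say "cow" starts associating the prompt with the wrong concept, which is the malfunction described above.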
Don't feel like actively ruining AI models, but still want to protect your art? Try Glaze, a program created by the same people who created Nightshade.
Learn More
Nightshade from the University of Chicago
Nightshade User and Installation Guide
MIT Technology Review
Medium Article
(also, i forgot how much i loved doing research for stuff, and this is literally such a cool program imo. even if you don't use it, you should look at the ai-generated art from batches that are poisoned, cuz they are so funny to look at.)
#art#technology#stop ai#fuck generative ai#fuck ai#anti ai#ai is not art#ai is theft#ai is stupid#ai is a plague#ai is bad#nightshade#nightshade ai#glaze ai#poison art#digital art
7 notes
·
View notes
Text
I'm thinking about real machines you can fuck. I don't mean machines that stimulate your erogenous zones. That's been done, and it's neat, but I'm looking for a machine that has erogenous zones. A machine you can interact with erotically, not only sexually.
I have a number of not-quite-satisfactory examples: amps & speakers, train engines, retro consoles, old text-generating ANNs. Witness my descent:
With an amp & speakers, you can hear the interference change as you stroke the contacts — it's the same sensory feedback loop as a human's gasps & whimpers.
I might be into low frequency sounds. which reminds me: train engines. Modern ones — their deep, rumbling hum. I heard it, yes, resonating in my skull, and my hazy thoughts, my rattling perception, and I felt it in my chest as I moved, enraptured, down the platform, ever closer, to the source, passing through the rise and fall of standing waves.
Old game consoles seem to provide interesting glitches — I'm guessing this is because the whole system was so resource-constrained, there's no room for error-checking, and different types of data & code are stored in the same space, so pointing to the wrong location causes the program to start completely misinterpreting the data there (consider the humble missingno). Again, this produces interesting sensory (auditory & visual) experiences — more interesting but less responsive than those from fingering an aux port.
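(A toy illustration of that same-bytes-wrong-lens misinterpretation, in Python rather than anything console-specific, since I can't hand you a Game Boy over the internet:)

```python
import struct

# Four bytes written as a float...
data = struct.pack("<f", 3.14159)
# ...then read back as an unsigned 32-bit integer: same memory, wrong interpretation.
print(struct.unpack("<I", data)[0])   # prints a big, meaningless-looking number
```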
What I'm getting at with the consoles is that I want tech that fails interesting, that will keep going, stumbling over its own compounding errors, flashing pretty colors and letting out beautiful discordant sounds until it finally ceases to function at all.
Then there's text-generating neural nets — I prefer the older ones, where a bit of unexpected input causes them to go completely off the rails and say things like "TWITTER’S H-E-L-T-S WON’T FUNCTION DUE TO OVERLOAD OF BADTOMATE CALLS" or "For a photo or more information, contact the ZOPERIAN at      [email protected]                                            Â"
These neural nets are easy to break in interesting ways with your own clever manipulations, but they don't offer much in the way of sensory experience.
My ball of thoughts. Now to bring them together:
Touching audio contacts has the distinct benefit of connecting physical touch to changes in the machine and its feedback — even slight pressure changes will alter the speakers' buzzing. But all it does is buzz.
Train engines provide a rich sensory experience by resonating with body cavities, but there's not much most people can safely do to interact with the engine in turn.
Retro consoles and old (pre-GPT3 or so) text-generating neural nets generate interesting output when cleverly perturbed. Neural nets are easier to break, but consoles provide a fuller sensory experience.
So I suppose I'm looking for something that will noticeably glitch, but continue to function, with my fingers in its guts, thereby creating an interesting, responsive sensory experience that allows me to see, feel, and hear the effects of my touch.
#the late night delirium returns#I've been pondering this for a little while but I wanted to get (most of) the thoughts out so others can pickem up & run#because really it should be possible to create a machine that combines a lot of these desirable properties#maybe i should talk to my electrical engineer friend...#🜡
19 notes
·
View notes