#feature engineering in machine learning
Text
What is Feature Engineering in Machine Learning
Summary: Feature engineering in Machine Learning is pivotal for refining raw data into predictive insights. Techniques like extraction, selection, transformation, and handling of missing data optimise model performance, ensuring accurate predictions. It bridges the gap between intricate raw data and reliable predictions, empowering practitioners to uncover meaningful patterns and drive informed decision-making across domains.

Introduction
In today's data-driven era, Machine Learning has become pivotal across industries, revolutionising decision-making and automation. Feature Engineering in Machine Learning is at the heart of this transformative technology—an art and science that involves crafting data features to maximise model performance.
This blog explores the crucial role of Feature Engineering in Machine Learning, highlighting its significance in refining raw data into predictive insights. As businesses strive for a competitive edge through data-driven strategies, understanding how feature engineering enhances Machine Learning models becomes paramount.
Join us as we delve into techniques and strategies that empower Data Scientists to harness the full potential of their data.
What is Machine Learning?
Machine Learning is a branch of artificial intelligence (AI) that empowers systems to learn and improve from experience without explicit programming. It revolves around algorithms enabling computers to learn and automatically make decisions based on data. This section provides a concise definition and explores its core principles and the various paradigms under which Machine Learning operates.
Definition and Core Principles
Machine Learning involves algorithms that learn patterns and insights from data to make predictions or decisions. Unlike traditional programming, where rules are explicitly defined, Machine Learning models are trained using vast amounts of data, allowing them to generalise and improve their performance over time.
The core principles include data-driven learning, iterative improvement, and new data adaptation.
The common learning paradigms in Machine Learning are:
Supervised Learning: In supervised learning, algorithms learn from labelled data, where each input-output pair is explicitly provided during training. The goal is to predict outputs for new, unseen data based on patterns learned from the labelled examples.
Unsupervised Learning: Unsupervised learning involves discovering patterns and structures from unlabeled data. Algorithms here aim to find inherent relationships or groupings in the data without explicit guidance.
Reinforcement Learning: This paradigm focuses on learning optimal decision-making through interaction with an environment. Algorithms learn to achieve a goal or maximise a reward by taking actions and receiving feedback or reinforcement signals.
Each paradigm serves distinct purposes in solving different types of problems, contributing to the versatility and applicability of Machine Learning across various fields.
What is Feature Engineering in Machine Learning?
Feature engineering in Machine Learning refers to selecting, transforming, and creating the features (variables) from raw data that are most relevant to the predictive modelling task. It involves crafting the correct set of input variables to improve model accuracy and performance.
This process aims to highlight meaningful patterns in the data and enhance the predictive power of Machine Learning algorithms.
Importance of Feature Selection and Transformation
Effective feature selection and transformation are crucial steps in the data preprocessing pipeline of Machine Learning. By selecting the most relevant features, the model becomes more focused on the essential patterns within the data, reducing noise and improving generalisation.
Transformation techniques such as normalisation or scaling ensure that features are on a comparable scale, preventing biases in model training. These processes not only streamline the learning process for Machine Learning algorithms but also contribute significantly to the interpretability and efficiency of the models.
Feature engineering bridges raw data and accurate predictions, laying the foundation for successful Machine Learning applications. By carefully curating features through selection and transformation, Data Scientists can uncover deeper insights and build robust models that perform well across diverse datasets and real-world scenarios.
Importance of Feature Engineering in Machine Learning
In Machine Learning, the quality of features—the inputs used to train models—profoundly influences predictive accuracy and performance. Effective feature engineering enhances the interpretability of models and facilitates better decision-making based on data-driven insights.
Impact on Model Performance
Quality features serve as the foundation for Machine Learning algorithms. By carefully selecting and transforming these features, practitioners can significantly improve model performance.
For instance, in a predictive model for customer churn, customer demographics, purchase history, and interaction frequency can be engineered to capture nuanced behaviours that correlate strongly with churn likelihood. This targeted feature engineering refines the model's ability to distinguish between churners and loyal customers and enhances its predictive power.
Examples of Enhanced Predictive Models
Consider a fraud detection system where features like transaction amount, location, and time are engineered to extract additional information, such as transaction frequency patterns or deviations from typical behaviour. By leveraging these engineered features, the model can more accurately identify suspicious activities, reducing false positives and improving overall detection rates.
Investing time and effort in feature engineering is crucial for building robust Machine Learning models that deliver actionable insights and drive informed decision-making across various domains.
Common Techniques in Feature Engineering in Machine Learning

Feature engineering is a cornerstone in developing robust Machine Learning models, where the quality and relevance of features directly impact model performance and predictive accuracy. This section explores several essential techniques in feature engineering that transform raw data into meaningful insights for Machine Learning algorithms.
Feature Extraction
Feature extraction transforms raw data into a structured format that facilitates model learning and improves predictive capabilities. In text mining, documents are processed to extract critical features such as word frequencies or semantic meanings using natural language processing (NLP) techniques. Similarly, in image processing, features like edges or textures are extracted to describe visual content.
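To make this concrete, here is a minimal sketch of text feature extraction using scikit-learn's TfidfVectorizer; the toy corpus and parameter choices are illustrative assumptions rather than a prescribed recipe.

```python
# A minimal sketch of extracting word-frequency features from raw text.
# The corpus below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "Feature engineering turns raw data into model inputs.",
    "Raw text must be converted into numeric features.",
    "Models learn patterns from engineered features.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(corpus)  # sparse matrix: documents x vocabulary terms

print(vectorizer.get_feature_names_out())  # the extracted vocabulary
print(X.shape)                             # (3 documents, n extracted features)
```

Each row of X is now a numeric feature vector that a standard classifier can consume.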
Feature Selection
Effective feature selection enhances model performance by focusing on the most relevant features while mitigating computational complexity and overfitting risks. Methods include filter methods that assess feature relevance based on statistical measures, wrapper methods that evaluate subsets based on model performance, and embedded methods that integrate feature selection into model training.
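As a hedged illustration, the sketch below applies a simple filter method in scikit-learn, scoring each feature with an ANOVA F-test and keeping the top two; the built-in iris data stands in for a real dataset.

```python
# A minimal sketch of a filter method: rank features by a statistical
# score and keep the k most relevant ones.
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

selector = SelectKBest(score_func=f_classif, k=2)  # ANOVA F-score per feature
X_selected = selector.fit_transform(X, y)

print(selector.scores_)   # relevance score for each original feature
print(X_selected.shape)   # only the 2 highest-scoring features remain
```

Wrapper and embedded methods follow the same fit/transform pattern but evaluate features through a model rather than a standalone statistic.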
Feature Transformation
Feature transformation techniques preprocess data to improve model interpretability and performance. Normalisation scales numerical features to a standard range, standardisation adjusts features to have a mean of zero and a standard deviation of one, and log transforms adjust skewed data distributions to improve model fit.
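The sketch below illustrates all three transformations on an invented, right-skewed column; which one is appropriate depends on the model and the data.

```python
# A minimal sketch of normalisation, standardisation, and a log transform.
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

values = np.array([[1.0], [2.0], [4.0], [8.0], [100.0]])  # toy, right-skewed data

normalised = MinMaxScaler().fit_transform(values)      # rescaled into [0, 1]
standardised = StandardScaler().fit_transform(values)  # mean 0, standard deviation 1
log_scaled = np.log1p(values)                          # compresses the long right tail
```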
Handling Missing Data
Handling missing data is essential to maintain data integrity and ensure robust model performance. Techniques include imputation methods that replace missing values with substitutes like the mean or median, deletion of instances with extensive missing data, and advanced techniques such as predictive modelling to estimate missing values based on other features.
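A minimal sketch of the imputation approach, assuming scikit-learn and an invented column with gaps:

```python
# Replace missing values with a simple statistical substitute.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[1.0], [2.0], [np.nan], [4.0], [np.nan]])

mean_imputed = SimpleImputer(strategy="mean").fit_transform(X)
median_imputed = SimpleImputer(strategy="median").fit_transform(X)
# More advanced options (e.g. model-based imputation) follow the same API.
```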
Frequently Asked Questions
What is feature engineering in Machine Learning?
Feature engineering in Machine Learning involves selecting, transforming, and creating data features to enhance model performance and accuracy.
Why is feature engineering important in Machine Learning?
Effective feature engineering refines data insights, improves model interpretability, and boosts predictive accuracy across diverse datasets.
What are standard techniques in feature engineering?
Techniques include feature extraction, selection, transformation, and handling of missing data, all crucial for optimising Machine Learning models.
Conclusion
Feature engineering is indispensable in Machine Learning, transforming raw data into meaningful features that amplify model performance. By meticulously selecting and refining data inputs, practitioners enhance predictive accuracy and ensure robustness across various applications.
The strategic crafting of features improves model efficiency and facilitates better decision-making through actionable insights derived from comprehensive data analysis. As businesses increasingly rely on data-driven strategies, mastering feature engineering remains a pivotal skill for unlocking the full potential of Machine Learning in solving complex real-world problems.
#feature engineering#feature engineering in machine learning#machine learning#data science#engineering
0 notes
Text
Optimising AI Model Development with AutoML Technologies
In today's data-driven world, the development of artificial intelligence (AI) plays a central role in improving business processes and decision-making. AutoML (Automated Machine Learning) technologies offer innovative approaches to optimising AI model development by reducing manual effort and increasing efficiency. In this article, we will…
#Automation#AutoML#Efficiency Gains#Feature Engineering#Business Processes#Innovation#Innovations#AI Models#Machine Learning#Random Forests#RPA
0 notes
Text
Feature Engineering Techniques To Supercharge Your Machine Learning Algorithms
Are you ready to take your machine learning algorithms to the next level? If so, get ready because we’re about to dive into the world of feature engineering techniques that will supercharge your models like never before. Feature engineering is the secret sauce behind successful data science projects, allowing us to extract valuable insights and transform raw data into powerful predictors. In this blog post, we’ll explore some of the most effective and innovative techniques that will help you unlock hidden patterns in your datasets, boost accuracy levels, and ultimately revolutionize your machine learning game. So grab a cup of coffee and let’s embark on this exciting journey together!
Introduction to Feature Engineering
Feature engineering is the process of transforming raw data into features that can be used to train machine learning models. In this blog post, we will explore some common feature engineering techniques and how they can be used to improve the performance of machine learning algorithms.
One of the most important aspects of feature engineering is feature selection, which is the process of selecting the most relevant features from a dataset. This can be done using a variety of methods, including manual selection, statistical methods, and machine learning algorithms.
Once the relevant features have been selected, they need to be transformed into a format that can be used by machine learning algorithms. This may involve scaling numerical values, encoding categorical values as integers, or creating new features based on existing ones.
It is often necessary to split the dataset into training and test sets so that the performance of the machine learning algorithm can be properly evaluated.
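A minimal sketch of that split, assuming scikit-learn and synthetic stand-in data rather than a real dataset:

```python
# Hold out a test set so performance is measured on unseen data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0  # 80% train, 20% test
)
```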
What is a Feature?
A feature is a characteristic or set of characteristics that describe a data point. In machine learning, features are typically used to represent data points in a dataset. When choosing features for a machine learning algorithm, it is important to select features that are relevant to the task at hand and that can be used to distinguish between different classes of data points.
There are many different ways to engineer features, and the approach that is taken will depend on the type of data being used and the goal of the machine learning algorithm. Some common techniques for feature engineering include:
– Extracting features from text data using natural language processing (NLP) techniques
– Creating new features by combining existing features (e.g., creating interaction terms)
– Transforming existing features to better suit the needs of the machine learning algorithm (e.g., using logarithmic transformations for numerical data)
– Using domain knowledge to create new features that capture important relationships in the data
How Does Feature Engineering Help Boost ML Algorithms?
Feature engineering is the process of using domain knowledge to extract features from raw data that can be used to improve the performance of machine learning algorithms. This process can be used to create new features that better represent the underlying data or to transform existing features so that they are more suitable for use with machine learning algorithms.
The benefits of feature engineering can be seen in both improved model accuracy and increased efficiency. By carefully crafting features, it is possible to reduce the amount of data required to train a machine learning algorithm while also increasing its accuracy. In some cases, good feature engineering can even allow a less powerful machine learning algorithm to outperform a more complex one.
There are many different techniques that can be used for feature engineering, but some of the most common include feature selection, feature transformation, and dimensionality reduction. Feature selection involves choosing which features from the raw data should be used by the machine learning algorithm. Feature transformation involves transforming or changing the values of existing features so that they are more suitable for use with machine learning algorithms. Dimensionality reduction is a technique that can be used to reduce the number of features in the data by combining or eliminating features that are similar or redundant.
Each of these techniques has its own strengths and weaknesses, and there is no single best approach for performing feature engineering. The best approach depends on the specific dataset and machine learning algorithm being used. In general, it is important to try out different techniques and see which ones work best for your particular application.
Types of Feature Engineering Techniques
There are many different types of feature engineering techniques that can be used to improve the performance of machine learning algorithms. Some of the most popular techniques include:
1. Data preprocessing: This technique involves cleaning and preparing the data before it is fed into the machine learning algorithm. This can help to improve the accuracy of the algorithm by removing any noisy or irrelevant data.
2. Feature selection: This technique involves selecting the most relevant features from the data that will be used by the machine learning algorithm. This can help to improve the accuracy of the algorithm by reducing the amount of data that is processed and making sure that only the most important features are used.
3. Feature extraction: This technique involves extracting new features from existing data. This can help to improve the accuracy of the algorithm by providing more information for the algorithm to learn from.
4. Dimensionality reduction: This technique reduces the number of features that are used by the machine learning algorithm. This can help to improve the accuracy of the algorithm by reducing complexity and making sure that only the most important features are used.
– Data Preprocessing
Data preprocessing is a critical step in any machine learning pipeline. It is responsible for cleaning and formatting the data so that it can be fed into the model.
There are a number of techniques that can be used for data preprocessing, but some are more effective than others. Here are a few of the most popular methods:
– Standardization: This technique is used to rescale the data so that it has a mean of 0 and a standard deviation of 1. This is often done before feeding the data into a machine learning algorithm, as it can help the model converge faster.
– Normalization: This technique is used to rescale the data so that each feature is in the range [0, 1]. This is often done before feeding the data into a neural network, as it can help improve convergence.
– One-hot encoding: This technique is used to convert categorical variables into numerical ones. This is often done before feeding the data into a machine learning algorithm, as many models cannot handle categorical variables directly.
– Imputation: This technique is used to replace missing values in the data with something else (usually the mean or median of the column). This is often done before feeding the data into a machine learning algorithm, as many models cannot handle missing values directly.
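Of these, the one-hot encoding step is the least self-explanatory, so here is a minimal sketch with an invented categorical column:

```python
# Convert a categorical column into indicator (one-hot) columns.
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

encoded = pd.get_dummies(df, columns=["color"])
print(encoded)  # color_blue, color_green, color_red indicator columns
```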
– Feature Selection
There are a variety of feature selection techniques that can be used to improve the performance of machine learning algorithms. Some common methods include:
-Filter Methods: Filter methods are based on ranking features according to some criterion and then selecting a subset of the most relevant features. Common criteria used to rank features include information gain, mutual information, and chi-squared statistics.
-Wrapper Methods: Wrapper methods use a machine learning algorithm to evaluate the performance of different feature subsets and choose the best performing subset. This can be computationally expensive but is often more effective than filter methods.
-Embedded Methods: Embedded methods combine feature selection with the training of the machine learning algorithm. The most common embedded method is regularization, which penalizes certain parameters in the model if they are not relevant to the prediction task.
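As a rough sketch, here is how a wrapper method (recursive feature elimination) and an embedded method (L1 regularization) look in scikit-learn, with synthetic data standing in for a real problem:

```python
# Wrapper vs. embedded feature selection, sketched on synthetic data.
from sklearn.datasets import make_regression
from sklearn.feature_selection import RFE
from sklearn.linear_model import Lasso, LinearRegression

X, y = make_regression(n_samples=100, n_features=8, n_informative=3, random_state=0)

# Wrapper: repeatedly fit a model and discard the weakest features.
rfe = RFE(LinearRegression(), n_features_to_select=3).fit(X, y)
print(rfe.support_)  # boolean mask of the features that were kept

# Embedded: L1 regularization drives irrelevant coefficients to exactly zero.
lasso = Lasso(alpha=1.0).fit(X, y)
print(lasso.coef_)   # zero entries mark features the model discarded
```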
– Feature Transformation
Feature engineering is the process of creating new features from existing data. This can be done by combining different features, transforming features, or creating new features from scratch.
Feature engineering is a critical step in machine learning because it can help improve the performance of your algorithms. In this blog post, we will discuss some common feature engineering techniques that you can use to supercharge your machine learning algorithms.
One common technique for feature engineering is feature transformation. This involves transforming existing features to create new ones. For example, you could transform a feature such as “age” into a new feature called “age squared”. This would be useful if you were trying to predict something like life expectancy, which often increases with age but then levels off at an older age.
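A minimal sketch of that transformation, with an invented age column:

```python
# Derive a squared term so a linear model can capture a curved effect.
import pandas as pd

df = pd.DataFrame({"age": [18, 35, 52, 70]})
df["age_squared"] = df["age"] ** 2
```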
Another common technique is feature selection, which is the process of choosing which features to include in your model. This can be done manually or automatically using a variety of methods such as decision trees or Genetic Algorithms.
Once you have decided which features to include in your model, you may want to perform dimensionality reduction to reduce the number of features while still retaining as much information as possible. This can be done using techniques such as Principal Component Analysis (PCA) or Linear Discriminant Analysis (LDA).
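A hedged sketch of the PCA option, using random data purely to show the mechanics:

```python
# Project 20 features down to 5 principal components.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))  # stand-in for a real feature matrix

pca = PCA(n_components=5)
X_reduced = pca.fit_transform(X)

print(pca.explained_variance_ratio_.sum())  # share of variance retained
```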
You may also want to standardize your data before feeding it into your machine learning algorithm. Standardization involves rescaling the data so that it has a mean of 0 and a standard deviation of 1, as described in the data preprocessing section above.
– Generating Synthetic Features
Generating synthetic features is a great way to supercharge your machine learning algorithms. This technique can be used to create new features that are not present in the original data set. This can be done by combining existing features, or by using a variety of techniques to generate new features from scratch.
This technique is often used in conjunction with other feature engineering techniques, such as feature selection and feature extraction. When used together, these techniques can greatly improve the performance of your machine learning algorithms.
Examples of Successful Feature Engineering Projects
1. One of the most well-known examples of feature engineering is the Netflix Prize. In order to improve their movie recommendation system, Netflix released a dataset of 100 million ratings and allowed anyone to compete to find the best algorithm. The grand prize was awarded to a team that used a combination of features, including movie genres, release year, and average rating, to improve the accuracy of predictions by 10%.
2. Another example is Kaggle’s Merck Millipore Challenge, which asked participants to predict the binding affinity of small molecules to proteins. The winning team used a variety of features, including chemical structure data and protein sequence data, to achieve an accuracy of over 99%.
3. In the Google Brain Cat vs. Dog Challenge, participants were tasked with using machine learning to distinguish between pictures of cats and dogs. The winning team used a number of different features, such as color histograms and edge detection, to achieve an accuracy of over 96%.
Challenges Faced While Doing Feature Engineering
The biggest challenge when it comes to feature engineering is figuring out which features will actually be useful in predicting the target variable. There’s no easy answer to this question, and it often requires a lot of trial and error. Additionally, some features may be very time-consuming and expensive to create, so there’s a trade-off between accuracy and practicality.
Another challenge is dealing with missing data. This can be an issue when trying to create new features, especially if those features are based on other features that have missing values. One way to deal with this is to impute the missing values, but this can introduce bias if not done properly.
Sometimes the relationships between features and the target variable can be non-linear, meaning that standard linear methods of feature engineering won’t work. In these cases, it’s necessary to get creative and come up with custom transformation methods that capture the complex relationships.
Conclusion
Feature engineering is a powerful tool that can be used to optimize the performance of your machine learning algorithms. By utilizing techniques such as feature selection, dimensionality reduction and data transformation, you can drastically improve the accuracy and efficiency of your models. With these tools in hand, you will be well-equipped to tackle any machine learning challenge with confidence.
0 notes
Text
Yandere! Bad Guy x Reader
I am currently in my Natural Born Killers nostalgia, and so I'm borrowing its vibes and bringing you this: a bad-to-the-bone, rock-and-roll attitude yandere who constantly makes you question your own morality. Featuring an old OC!
Content: gender neutral reader, violence, murder, male yandere
He fell in love with you at first sight. A goody two shoes, quiet and obedient. Shy. Oh, terribly shy. You couldn't even meet his eyes. He knew you were the kind others would step on, take advantage of. But there was more to it, much more to uncover.
Who was it? A relative, a friend, a coworker? You know, that person holding you back, keeping you in your place. The one who'd always make you feel small and insignificant. The one who would always find something to criticize. How did it feel when you found them on the ground, bashed in and bloodied up? He was standing above the lifeless body, catching his breath, a cocky smile plastered on his face. His way of courting you.
He looked so tall in that moment, towering above your hesitant self, his gaze of a confidence and intensity you'd never known before. "Well? What are you waiting for? Get in", he said, gesturing towards a convertible he most likely stole earlier that day. What possessed you in that moment to join him without delay? Was it his charisma? Or did you know in the depth of your soul that he wouldn't take no for an answer?
You see, he's known it from the beginning. Someone like you needs someone like him. You’re a sweet little lamb lost among the wolves. The world would eat you right up if you were left by yourself. But now you have him. And he won't let his precious prey get away. Oh, dear, no. If he wants something, he gets it. And he's never wanted anything more than you.
"You didn't...even tell me your name", you sheepishly spoke up from the passenger seat, trying to keep your mind away from the crime you'd just witnessed. "Just call me Tig", he said casually with a yawn, speeding away. "Won't you be in trouble, Tig? Why would you even kill-" you tried to reason. "What kinda question is that? They treated you like shit and it pissed me off." He glanced at you with a frown, taking another drag off his cigarette. "You're mine now, so whatever happens to you is my business. Got it?" You just stared. Was that his way of asking you out?
Tig lives by his own rules, as you quickly learned from becoming his companion. Always on the run, indifferent to the world. For the most part, to your surprise, he's well-behaved. If people don't mess with him, he doesn't mess with them. Simple as that.
Anything involving you, however, sets him off terribly. Like a rabid, ferocious guard dog, he's ready to pounce on whoever approaches you the wrong way. Last week you stopped at a highway diner for coffee, and on your way back to your table, you jokingly pulled a clumsy dance move to the song playing from the speakers. Tig observed you with an amused smile, sipping from his cup. A passerby joined you, resting his arm on your waist flirtatiously. Tig's smile dropped in an instant, and next thing you knew, the whole place was splattered in blood. No one made it out.
"I didn't even finish my coffee", you whined, already used to the occasional massacre. The man hopped behind the counter and threw on a bloodied cap. "What will it be, sir/ma'am?" he pretended, dangling a takeaway cup and starting the espresso machine. "I never told you, but I used to be a barista", he declared proudly. An entirely different person from the unhinged killer you witnessed minutes ago. "What? You said you were a mechanic", you questioned with raised brows. "That's also true. I'm a jack of all trades, I suppose. You know what I'm best at, though?" He lowered himself until his forehead touched yours. "Pleasing you."
The man is romantic in his own way. He twists the key, and the engine stops. You follow him out of the car in confusion. "Why did we stop here?" He briefly lifts himself up onto the tall fence securing the bridge, and inhales deeply. "Isn't it a nice view?" he says, nodding ahead. It is a scenic sight, sure. The river slithers along the lush valley, and the setting sun gives everything a dramatic tint. "Give me your hand", he suddenly demands as he goes to grab it himself. Before you can ask for an explanation, he quickly drags a blade across your palm, and you wince in pain. He repeats the gesture with his own hand, locking his fingers with yours over the rail. You watch as fresh blood trails along your skin, eventually falling into droplets and vanishing into the river. "Now we're going to be everywhere", he remarks playfully. "Okay, but what was the point?" you insist, a little baffled.
"Isn't it obvious? Maybe this will help", he continues, procuring a ring from his pocket. "I'm saying I want to marry you, (Y/N)."
You open your mouth to answer, but he already slides it up your finger, eyes glimmering in excitement.
"You're never getting away from me, love."
#yes I'm advertising the movie again because it's a CLASSIC#yandere#yandere x reader#yandere x darling#yandere x you#yandere headcanons#yandere scenarios#yandere imagines#yandere killer#yandere delinquent#yandere oc#yandere oc x reader#yandere male#yandere male x reader#yandere boyfriend#male yandere#doodle#my art#yandere art#tig
1K notes
·
View notes
Text
Reverse engineers bust sleazy gig work platform
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/11/23/hack-the-class-war/#robo-boss
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
Supposedly, these lines were included in a 1979 internal presentation at IBM; screenshots of them routinely go viral:
https://twitter.com/SwiftOnSecurity/status/1385565737167724545?lang=en
The reason for their newfound popularity is obvious: the rise and rise of algorithmic management tools, in which your boss is an app. That IBM slide is right: turning an app into your boss allows your actual boss to create an "accountability sink" in which there is no obvious way to blame a human or even a company for your maltreatment:
https://profilebooks.com/work/the-unaccountability-machine/
App-based management-by-bossware turns the bug identified by the unknown author of that IBM slide into a feature. When an app is your boss, it can force you to scab:
https://pluralistic.net/2023/07/30/computer-says-scab/#instawork
Or it can steal your wages:
https://pluralistic.net/2023/04/12/algorithmic-wage-discrimination/#fishers-of-men
But tech giveth and tech taketh away. Digital technology is infinitely flexible: the program that spies on you can be defeated by another program that defeats spying. Every time your algorithmic boss hacks you, you can hack your boss back:
https://pluralistic.net/2022/12/02/not-what-it-does/#who-it-does-it-to
Technologists and labor organizers need one another. Even the most precarious and abused workers can team up with hackers to disenshittify their robo-bosses:
https://pluralistic.net/2021/07/08/tuyul-apps/#gojek
For every abuse technology brings to the workplace, there is a liberating use of technology that workers unleash by seizing the means of computation:
https://pluralistic.net/2024/01/13/solidarity-forever/#tech-unions
One tech-savvy group on the cutting edge of dismantling the Torment Nexus is Algorithms Exposed, a tiny, scrappy group of EU hacker/academics who recruit volunteers to reverse engineer and modify the algorithms that rule our lives as workers and as customers:
https://pluralistic.net/2022/12/10/e2e/#the-censors-pen
Algorithms Exposed have an admirable supply of seemingly boundless energy. Every time I check in with them, I learn that they've spun out yet another special-purpose subgroup. Today, I learned about Reversing Works, a hacking team that reverse engineers gig work apps, revealing corporate wrongdoing that leads to multimillion euro fines for especially sleazy companies.
One such company is Foodinho, an Italian subsidiary of the Spanish food delivery company Glovo. Foodinho/Glovo has been in the crosshairs of Italian labor enforcers since before the pandemic, racking up millions in fines – first for failing to file the proper privacy paperwork disclosing the nature of the data processing in the app that Foodinho riders use to book jobs. Then, after the Italian data commission investigated Foodinho, the company attracted new, much larger fines for its out-of-control surveillance conduct.
As all of this was underway, Reversing Works was conducting its own research into Glovo/Foodinho's app, running it on a simulated Android handset inside a PC so they could peer into the app's data collection and processing. They discovered a nightmarish world of pervasive, illegal worker surveillance, and published their findings a year ago, in November 2023:
https://www.etui.org/sites/default/files/2023-10/Exercising%20workers%20rights%20in%20algorithmic%20management%20systems_Lessons%20learned%20from%20the%20Glovo-Foodinho%20digital%20labour%20platform%20case_2023.pdf
That report reveals all kinds of extremely illegal behavior. Glovo/Foodinho makes its riders' data accessible across national borders, so Glovo managers outside of Italy can access fine-grained surveillance information and sensitive personal information – a major data protection no-no.
Worse, Glovo's app embeds trackers from a huge number of other tech platforms (for chat, analytics, and more), making it impossible for the company to account for all the ways that its riders' data is collected – again, a requirement under Italian and EU data protection law.
All this data collection continues even when riders have clocked out for the day – it's as though your boss followed you home after quitting time and spied on you.
The research also revealed evidence of a secretive worker scoring system that ranked workers based on undisclosed criteria and reserved the best jobs for workers with high scores. This kind of thing is pervasive in algorithmic management, from gig work to Youtube and Tiktok, where performers' videos are routinely suppressed because they crossed some undisclosed line. When an app is your boss, your every paycheck is docked because you violated a policy you're not allowed to know about, because if you knew why your boss was giving you shitty jobs, or refusing to show the video you spent thousands of dollars making to the subscribers who asked to see it, then maybe you could figure out how to keep your boss from detecting your rulebreaking next time.
All this data-collection and processing is bad enough, but what makes it all a thousand times worse is Glovo's data retention policy – they're storing this data on their workers for four years after the worker leaves their employ. That means that mountains of sensitive, potentially ruinous data on gig workers is just lying around, waiting to be stolen by the next hacker that breaks into the company's servers.
Reversing Works's report made quite a splash. A year after its publication, the Italian data protection agency fined Glovo another 5 million euros and ordered them to cut this shit out:
https://reversing.works/posts/2024/11/press-release-reversing.works-investigation-exposes-glovos-data-privacy-violations-marking-a-milestone-for-worker-rights-and-technology-accountability/
As the report points out, Italy is extremely well set up to defend workers' rights from this kind of bossware abuse. Not only do Italian enforcers have all the privacy tools created by the GDPR, the EU's flagship privacy regulation – they also have the benefit of Italy's 1970 Workers' Statute. The Workers Statute is a visionary piece of legislation that protects workers from automated management practices. Combined with later privacy regulation, it gave Italy's data regulators sweeping powers to defend Italian workers, like Glovo's riders.
Italy is also a leader in recognizing gig workers as de facto employees, despite the tissue-thin pretense that adding an app to your employment means that you aren't entitled to any labor protections. In the case of Glovo, the fine-grained surveillance and reputation scoring were deemed proof that Glovo was employer to its riders.
Reversing Works' report is a fascinating read, especially the sections detailing how the researchers recruited a Glovo rider who allowed them to log in to Glovo's platform on their account.
As Reversing Works points out, this bottom-up approach – where apps are subjected to technical analysis – has real potential for labor organizations seeking to protect workers. Their report established multiple grounds on which a union could seek to hold an abusive employer to account.
But this bottom-up approach also holds out the potential for developing direct-action tools that let workers flex their power, by modifying apps, or coordinating their actions to wring concessions out of their bosses.
After all, the whole reason for the gig economy is to slash wage-bills, by transforming workers into contractors, and by eliminating managers in favor of algorithms. This leaves companies extremely vulnerable, because when workers come together to exercise power, their employer can't rely on middle managers to pressure workers, deal with irate customers, or step in to fill the gap themselves:
https://projects.itforchange.net/state-of-big-tech/changing-dynamics-of-labor-and-capital/
Only by seizing the means of computation can workers and organized labor turn the tables on bossware – both by directly altering the conditions of their employment, and by producing the evidence and tools that regulators can use to force employers to make those alterations permanent.
Image: EFF (modified) https://www.eff.org/files/issues/eu-flag-11_1.png
CC BY 3.0 http://creativecommons.org/licenses/by/3.0/us/
#pluralistic#etui#glovo#foodinho#algorithms exposed#reverse engineering#platform work directive#eu#data protection#algorithmic management#gdpr#privacy#labor#union busting#tracking exposed#reversing works#adversarial interoperability#comcom#bossware
352 notes
·
View notes
Text
The Winner Takes it All: Anakin Skywalker x Reader (Enemies-to-Lovers Modern AU)
NSFW! Minors DNI!!! Summary: The moment the thesis competition was announced, you knew your biggest threat. Anakin Skywalker, golden boy of the engineering department. He's the only other person smart enough to beat you, and the only other person insane enough to stay in the lab until midnight every night. He's also an asshole, but you're starting to think maybe he's not as bad as you thought he was... Pairing: Anakin Skywalker x Fem!Reader CW: mentions of masturbation WC: 3.4k AN: hello darlings!! another anakin x reader longer fic coming your way!! lmk what you think, and asks/requests are always open!
[Ch. 1], Ch. 2, Ch. 3, Ch. 4, Ch. 5, Ch. 6
Chapter 1: Soldering
The moment the competition was announced, you knew your biggest threat. Anakin Skywalker, golden boy of the department. As soon as he heard about it at the thesis info session of your senior year, his eyes found you in the crowd, because he knew you're his biggest rival, and you're coming for him. He was surprised to find you were looking at him, based on the way his eyes widened, and you found a shocking amount of satisfaction in it. The top prize was 10k and a job at Boeing, after all. The more you surprised him, the more likely you were to catch him off-guard. Not that you would sabotage his work, that was just unseemly conduct for a senior at Coruscant U, but you'd encourage his sloppiness.
The instant after the presentation finished, you rushed to the lab. The thesis lab adjoined the regular makerspace in a continuation of the glass walls and sleek design of the rest of the engineering building. You'd spent the end of your junior year there, when you'd had to submit your thesis proposal (A Novel Method for Glaucoma Detection Utilizing Machine Learning and Mass-Producible Hardware). Anakin was always there too, which made the space just a little more annoying, with the loud music blasting out of his headphones and the hair-raising racket of the band saw.
Last year, you'd decided to admit to yourself, despite your best efforts since you had met him, that okay, Anakin Skywalker was hot. Like, horrendously hot. He was a looker no matter what he did, with those blue puppy dog eyes, full lips, and his gorgeous chestnut hair, which looked so soft that you had wondered on multiple occasions what it would be like to touch it. And, being captain of the university taekwondo team, he was muscular as all get-out. You'd catch a peek at his calves and ass on hot days when he wore shorts, and his biceps and shoulders were almost always flexed in the lab when he was sawing something or bent over the soldering station. One time, he wore grey sweatpants, and you had to literally tear your eyes away. But it wasn't just those features that made him hot. It was, unfortunately, him as a person. The confidence with which he sauntered through the building. His mischievous smile that he'd cast you in group projects, or the clench of his jaw as he wired something finicky. Your roommate, Ahsoka, a junior and also his vice-captain, told you that, oh yeah, he was also really good with younger team members. That he taught kids in the nearby school once a week, too, even though he had such a busy schedule. Wasn't that just sweet.
He wasn't that kind to you. Another thing that made him hot, unfortunately, was his brain, and his wit. He was kind of smart, okay, very smart, and that might make him the one thing standing in your way this year. Anakin also never shied away from a biting comment at you, usually about how if you had done it correctly, you wouldn't have an issue with some wiring. Unfortunately, he was usually right, but you wouldn't give him the satisfaction of telling him that.
Your rivalry started in freshman year, when your physics professor would choose the best student's homework and post it to the class as an example. You were sure you'd be chosen--your first homework was perfect--but then you saw his name. Anakin Skywalker. The next week, you beat him, but then he came out on top immediately after. And so it went. Always fighting for the top spot, to see who could outdo the other. Now, the department was just paying you to do it.
You were in the lab right after the "Senior Thesis Information Session" presentation, using the few minutes you had before your thermodynamics class to tinker with the 3D print that had just finished. Then, the door slid open with the beep of an ID card. You didn't have to turn around to know it was Anakin. Only he would be insane enough to work on day 1 of the semester. Him, and you.
"So you're seriously competing for this, huh?" He asked, watching you sand off some rough edges off the plastic. His tone was playful, but there was an undercurrent of seriousness. He was sizing up the competition.
"Yup. And I'm gonna blow you out of the water," you said self-assuredly. Your project was too good not to win. Anakin barked out a laugh.
"Sure. Right. We'll see about that," he remarked. His voice was dripping with smugness, just like usual with you. You just rolled your eyes. It wasn't worth it to waste time verbally sparring with him, you had better things to do. Like thermo. So you pushed out of your chair, leaving the print on the shelf that had your name laser cut into wood (a gift you had made yourself after your junior thesis proposal got an A), and heading to Lecture Hall 3.56B. Anakin was, of course, heading there too. You were in lockstep, as always. However, he refused to walk there with you, so he waited precisely enough for you to close the door before he left too.
And so, the first three months of the semester passed in relative peace between the two of you. There was only a handful of people who used the thesis room, and you were the only ones there consistently. It helped because safety regulations meant you had to have a buddy in the room to use any of the really useful machines, so you sometimes found yourself pleased to see him. It meant you could get work done. At night, the engineering building was fifteen minutes away from the dorms where you both lived--in the same building, which vexed you to no end when you saw him in the dining hall--so you both had to make the walk home late at night through the city. Oftentimes, you ended up walking home at the same time. It would be wrong to call it walking together, because that would imply you were near each other, or in each other's company, which would be plain wrong. You were always as far as possible on the sidewalk, and oftentimes you two would end up speedwalking home, not allowing the other person to be faster. Was it childish? Maybe. Did you feel a rush of joy every single time you hit the door to your building before him? Definitely.
In November, as the biting cold chilled the air, you found yourself done before him. All your current tasks were done, and you had to wait for a print to finish before you could keep going, plus he wasn't using any machines that needed a buddy, according to lab rules. It had been a long day, and you'd barely dragged your bones into the lab, let alone through all that work.
"Hang on," his voice called from across the space. He was at the soldering station in his safety glasses, bent over some chip.
"What?" Why couldn't you just go home? To your beautiful bed?
"I don't feel good about you walking home alone, so can you just wait for, like, three more seconds?" He wasn't even looking at you as he said it, instead he was pressing the soldering iron to some metal. You scoffed. Like you were so frail you couldn't walk fifteen minutes on your own.
"Are you serious? Do you think I'm vulnerable because, what, I have a vagina? I've taken self-defense classes, thank you very much." Your tone was poisonous, and you tried to infuse every drop of venom you had in you at his stupid idea. Anakin finally looked up from the bench, turning the iron off and cleaning it in the steel wool, catching your eyes with an angry glare.
"No, dumbass. You're just less likely to get robbed in this part of town if you're not alone. But do what you want, I guess. Have fun getting all your valuables taken!" He shrugged sardonically and turned off the vent fan above him. Anakin was right, it killed you to admit. You didn't exactly feel safe walking home at 3am through this part of town. There were enough reports of students getting hurt. So you planted yourself in your chair and waited. When he saw you, a smug smile grew on his face. Asshole.
"C'mon, let's go home," he said nonchalantly once he'd shut down and locked the woodworking room and the laser cutters. As you walked home, this time at a comfortable pace and with his headphones off, you realized it was almost nice, peaceful to be with him like this. The night was still, not a single thing moving in the dark of the night. You passed the corner store, its graffiti-covered grate down at night, then the Vietnamese restaurant you loved, dark and empty. There was no one on the planet but the two of you at that moment. Much to your chagrin, you didn't mind it at that moment. Anakin looked even more ethereal in the moonlight, lighting up the light parts of his hair a silvery white and casting shadows all over his face. He really was handsome, you admitted reluctantly. When you got home, he wished you a good night, which he had never gone. You found the word escaping your lips out of habit. After that, your walking home at the same time turned into walking home together. On November the 8th, he asked you how you were doing. You told him you were good, your tone clipped. He echoed good into the quiet street, then you lapsed into silence. On the 10th, he asked if Ahsoka was feeling better. She had sprained her ankle at practice the previous day. You told him she was, and he said good again. On the 11th, he asked how your project was going, and, in a fit of weakness, you told him it wasn't great. That you were nervous about your first real test of the finished product, the one that would tell you if the past three months had been wasted or not. He told you that if anyone could do it, it would be you, and you spend the rest of the walk wondering where the insult buried inside the statement was hiding. Later that night, once you had tucked into bed, you realized there wasn't any insult at all, just genuine encouragement. For the next week, your walks were filled with slightly guarded conversation, sometimes about upcoming homework assignments, but sometimes about how the taekwondo team was doing, or if you thought Professor Yoda's ear hairs were a countable or uncountable infinity. But he was still an asshole.
About a week later, you were alone with Anakin in the lab around midnight, working on a piece of the lens, trying to get the refraction just right before the test run, when your phone buzzed. Midterm Grade Posted for PHYS 485: Thermodynamics. Your heart stopped. You had been hoping and praying that the number of hours you'd poured into your thesis wouldn't come back to bite you in terms of classwork, but now was the moment of truth. You opened the notification, then to the Canvas page, where you saw your grade. 38/100. Everything in the world stopped. How could you have fucked up that badly? Your eyes scanned over instructor comments. Average class grade: 40/100. Maximum grade: 49/100. Okay, okay. It would be curved up, and you'd probably get a B, but you were below average for the first time in your life. Fuck. Fuck. How could this happen? You glared at Anakin, who was screwing in a bolt to the metal scaffolding of his project. That motherfucker was probably the one who got 49. The thought made you so angry you bolted out of your chair and went to go grab the materials for your test. That motherfucker got everything. It wasn't fair.
You lined up the small device you made, plugged it into the port of your phone, and opened the corresponding software. Through the external lens, you scanned the two printed-out pictures of eyes, one with glaucoma and one without. You held your breath throughout the loading screen. Please, just let one thing go right. Please. Please. The little loading circle stopped. Both eyes were cleared of glaucoma. A false negative. Motherfucker. Three months of work, and for what? You'd never get the prize at this rate. You'd have to start from scratch. You slammed your fist onto the table in anger.
"Hey, there's hammers for that," Anakin called, teasing from the other side of the room. He looked up at you, mouth open to snark something else out, when he saw your eyes welling with tears.
"Woah, are you okay? What's wrong? Did you hurt yourself?" His voice was soft, warm. Anakin dropped the wrench he was holding on the table and half-jogged over to you, putting his hand on your shoulder. You jumped at the contact, but it wasn't entirely unwelcome. It was kind of comforting, actually, but you were too upset to notice that.
"It's just, it's not working, and I've spent so much time and--" you trailed off.
"Don't cry, it's okay, we can fix it," he said with a shrug and a smile. Why was he smiling? God, was he actually pleased right now? Suddenly, your tears turned to anger, not at yourself or the system or the difficulty of your project, but at him.
"Like you're not happy about this. I bet you sabotaged it yourself," you spat out and shrugged his hand off your shoulder. He balked.
"Sabotage? Are you serious? I'd never do that." You stood up, incensed, and pointed a finger into his chest.
"Really? It sounds exactly like something you would do--remember in sophomore year when Barriss's robot mysteriously stopped working?" He half laughed, half scoffed, mouth dropping open, then snapped back with his voice raised.
"You've got to be kidding! Maybe if you paid two seconds of attention to your classmates or anyone around you, you'd know it was her wiring! The connections were bad!"
"Sure," your voice dripped with sarcasm as you scoffed at his insult, "And when you told her it served her right? You were so smug!" Your voice was rising. He ran a hand through his hair and bit out another laugh as he retorted.
"And if I was? Like you're not the queen of being smug in this department. 'Oh, my robot's better, Anakin. I got an A, Anakin.'" He raised his voice high, mocking you. His eyes were wild, furious.
"Me? Smug? Look in the mirror, asshole! Pretend all you want, but I know who you are. You can pretend to be oh-so-nice to everyone else, but I see you for what you really are. Just. A. Fucking. Asshole." You emphasized each word with a jab of your finger, getting closer to him each time. The tension between you was turning somehow--were you losing the argument? You couldn't tell.
"Oh yeah? You don't know a single thing about me," he gritted out, right up in your face, jaw flexing. His intense eyes bored into yours, flicking back and forth, and then they dropped down to glance at your lips.
You weren't sure which one of you moved first, but all you felt was his lips against yours and your hands fisting in his hair, which it turned out was as perfectly soft as you had imagined. Bastard. Anakin's kisses were hot, insistent against your mouth as you sloppily made out in the middle of the lab. His arms, warm and firm, circled your waist and pulled you to him while you tilted your heads this way and that to get closer. Your tongue swiped his lower lip, and he treated you to a surprised, low moan that you wanted to hear again and again until your ears bled. He got your hint, though, and started teasing your lips with his tongue until you opened your mouth just enough to touch your tongue to his. His arms tightened and pulled you against him so that you could feel his warmth from chest to thigh. The two of you were frantic, like if you got close enough, deep enough in each other's mouths, you'd figure out why you were doing this and why it felt so goddamn good. Your heart was pounding when his hands slipped lower and grabbed you under your ass.
"Jump," he whispered huskily after he reluctantly separated his mouth from yours. You hopped, and he used the hands under your thighs to lift you up and sit you on the lab table. Dutifully, you wrapped your legs around his hips, interlocking your ankles around his unfairly attractive ass, and kept your hands buried in his hair. Anakin was back on your lips immediately. He was sloppy and excited until you shifted your hips against him, and then he became electric against you, even hungrier than before. You were definitely feeling something underneath your hips, a lump. It hit you that he was hard, and that sent a bolt of lightning between your legs. You'd stared a little bit more than you cared to admit that time he'd worn gray sweatpants, and what you'd seen was now pressed against you. You drew in a shaky breath at that idea, and you realized that God, he smelled like metal from his soldering earlier and, underneath that, sandalwood and vanilla.
Sometime around the time his hips tilted forward into yours, a beep echoed through the empty lab. You both jumped apart, leaving you sitting on the table, and the noise continued. Beep beep beep. The insistent noise came from one of the 3D printers in the corner. Anakin's print was done.
The silence of the lab felt deafening as you both panted. What had you done? Making out with your enemy was completely against lab safety guidelines, for one, and your morals, for another. Your heart was still pounding in your chest, despite your misgivings, but you willed those wisps of excitement deep down into some mental box. This couldn't happen. If there was a single person on this campus you couldn't fuck, it was Anakin. Not only was he rude, but if you got too close, how would you navigate it when only one of you won? Most importantly, though, you had hated him for four years. And for good reason. (Though you couldn't remember exactly what it was, or think critically at all, in that moment.)
"We shouldn't do that again, Anakin." Your voice was small in the empty space. For a second, his face fell, but he pressed his lips into a thin line to disguise it.
"Definitely not. I--Sorry." And that was that.
You walked home in complete silence, stealing glances at one another in the dark night. When you got to the door of your dorm, you opened your mouth to say something, but then closed it. Better not. So why, once you separated, did you feel so sad? Why did you want to see him again, to feel that silky hair under your fingers in your bed? You lay awake until the early hours of the night, and told yourself that your fingers slipping inside the waistband of your pajamas wasn't about Anakin, you just hadn't gotten some in way too long. It wasn't about Anakin. Even though it was his mouth and chest and arms you thought about when you came on your fingers, it wasn't about him.
♡♡♡♡♡♡♡♡♡♡♡♡♡♡♡♡♡♡♡
please let me know if you'd like to be added to the tag list!
#anakin skywalker#star wars anakin#anakin x reader#anakin smut#anakin x you#anakin skywalker x reader#anakin skywalker/you#anakin/you#anakin skywalker smut#anakin skywalker fanfiction#anakin skywalker imagine#anakin skywalker x you#star wars prequels#hayden christensen x reader#hayden christensen imagine
517 notes
·
View notes
Note
I saw something about generative AI on JSTOR. Can you confirm whether you really are implementing it and explain why? I’m pretty sure most of your userbase hates AI.
A generative AI/machine learning research tool on JSTOR is currently in beta, meaning that it's not fully integrated into the platform. This is an opportunity to determine how this technology may be helpful in parsing through dense academic texts to make them more accessible and gauge their relevancy.
To JSTOR, this is primarily a learning experience. We're looking at how beta users are engaging with the tool and the results that the tool is producing to get a sense of its place in academia.
In order to understand what we're doing a bit more, it may help to take a look at what the tool actually does. From a recent blog post:
Content evaluation
Problem: Traditionally, researchers rely on metadata, abstracts, and the first few pages of an article to evaluate its relevance to their work. In humanities and social sciences scholarship, which makes up the majority of JSTOR’s content, many items lack abstracts, meaning scholars in these areas (who in turn are our core cohort of users) have one less option for efficient evaluation.
When using a traditional keyword search in a scholarly database, a query might return thousands of articles that a user needs significant time and considerable skill to wade through, simply to ascertain which might in fact be relevant to what they’re looking for, before beginning their search in earnest.
Solution: We’ve introduced two capabilities to help make evaluation more efficient, with the aim of opening the researcher’s time for deeper reading and analysis:
Summarize, which appears in the tool interface as “What is this text about,” provides users with concise descriptions of key document points. On the back-end, we’ve optimized the Large Language Model (LLM) prompt for a concise but thorough response, taking on the task of prompt engineering for the user by providing advanced direction to:
Extract the background, purpose, and motivations of the text provided.
Capture the intent of the author without drawing conclusions.
Limit the response to a short paragraph to provide the most important ideas presented in the text. (A minimal sketch of this prompt design appears after this list.)
Search term context is automatically generated as soon as a user opens a text from search results, and provides information on how that text relates to the search terms the user has used. Whereas the summary allows the user to quickly assess what the item is about, this feature takes evaluation to the next level by automatically telling the user how the item is related to their search query, streamlining the evaluation process.
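To make the Summarize directives above concrete, here is a minimal sketch of how such a prompt might be wired up. This is not JSTOR's actual code: JSTOR has not disclosed its model, provider, or exact prompt, so the directives below simply paraphrase the three points above, the model name is a placeholder, and the OpenAI client is used purely for illustration.

from openai import OpenAI

client = OpenAI()

SUMMARIZE_DIRECTIVES = (
    "Extract the background, purpose, and motivations of the text provided. "
    "Capture the intent of the author without drawing conclusions. "
    "Limit the response to a short paragraph that provides the most "
    "important ideas presented in the text."
)

def summarize(document_text: str) -> str:
    """Return a 'What is this text about' style summary for a single item."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; JSTOR hasn't named its model
        messages=[
            {"role": "system", "content": SUMMARIZE_DIRECTIVES},
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content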
Discovering new paths for exploration
Problem: Once a researcher has discovered content of value to their work, it’s not always easy to know where to go from there. While JSTOR provides some resources, including a “Cited by” list as well as related texts and images, these pathways are limited in scope and not available for all texts. Especially for novice researchers, or those just getting started on a new project or exploring a novel area of literature, it can be needlessly difficult and frustrating to gain traction.
Solution: Two capabilities make further exploration less cumbersome, paving a smoother path for researchers to follow a line of inquiry:
Recommended topics are designed to assist users, particularly those who may be less familiar with certain concepts, by helping them identify additional search terms or refine and narrow their existing searches. This feature generates a list of up to 10 potential related search queries based on the document’s content. Researchers can simply click to run these searches.
Related content empowers users in two significant ways. First, it aids in quickly assessing the relevance of the current item by presenting a list of up to 10 conceptually similar items on JSTOR. This allows users to gauge the document’s helpfulness based on its relation to other relevant content. Second, this feature provides a pathway to more content, especially materials that may not have surfaced in the initial search. By generating a list of related items, complete with metadata and direct links, users can extend their research journey, uncovering additional sources that align with their interests and questions.
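The post doesn't say how "conceptually similar items" are found. One common way to build this kind of related-content feature is embedding similarity, sketched below under that assumption; the vectors, corpus, and helper name are illustrative, with only the 10-item cap taken from the description above.

import numpy as np

def top_related(query_vec: np.ndarray, corpus_vecs: np.ndarray, k: int = 10):
    """Return indices of the k corpus items most similar to the query item."""
    # Cosine similarity = dot product of L2-normalised vectors.
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    similarities = c @ q
    # Highest similarity first, capped at 10 items as described above.
    return np.argsort(-similarities)[:k]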
Supporting comprehension
Problem: You think you have found something that could be helpful for your work. It’s time to settle in and read the full document… working through the details, making sure they make sense, figuring out how they fit into your thesis, etc. This all takes time and can be tedious, especially when working through many items.
Solution: To help ensure that users find high-quality items, the tool incorporates a conversational element that allows users to query specific points of interest. This functionality, reminiscent of CTRL+F but for concepts, offers a quicker alternative to reading through lengthy documents.
By asking questions that can be answered by the text, users receive responses only if the information is present. The conversational interface adds an accessibility layer as well, making the tool more user-friendly and tailored to the diverse needs of the JSTOR user community.
Credibility and source transparency
We knew that, for an AI-powered tool to truly address user problems, it would need to be held to extremely high standards of credibility and transparency. On the credibility side, JSTOR’s AI tool uses only the content of the item being viewed to generate answers to questions, effectively reducing hallucinations and misinformation.
On the transparency front, responses include inline references that highlight the specific snippet of text used, along with a link to the source page. This makes it clear to the user where the response came from (and that it is a credible source) and also helps them find the most relevant parts of the text.
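As a rough illustration of the behaviour described in the last two sections (answering only when the information is present in the item, and returning the supporting snippet with its location), here is a hypothetical sketch. JSTOR has not published its implementation; the keyword-overlap retrieval, threshold, and function name below are stand-ins for whatever retrieval the real tool uses.

def answer_from_item(question: str, pages: list[str], min_overlap: int = 2):
    """Answer only from the item being viewed, with an inline page reference."""
    question_terms = set(question.lower().split())
    best = (None, None, 0)  # (page number, snippet, overlap score)
    for page_no, page in enumerate(pages, start=1):
        for sentence in page.split(". "):
            score = len(question_terms & set(sentence.lower().split()))
            if score > best[2]:
                best = (page_no, sentence, score)
    page_no, snippet, score = best
    if score < min_overlap:
        return None  # information not present in the text: no answer is given
    # The real tool has an LLM phrase the answer; this sketch just returns the
    # supporting snippet plus its location, mirroring the inline references.
    return {"snippet": snippet, "page": page_no}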
Text
Neural Filters Tutorial for Gifmakers by @antoniosvivaldi
Hi everyone! In light of my blog’s 10th birthday, I’m delighted to reveal my highly anticipated gifmaking tutorial using Neural Filters - a very powerful collection of filters that really broadened my scope in gifmaking over the past 12 months.
Before I get into this tutorial, I want to thank @laurabenanti, @maines, @cobbbvanth, and @cal-kestis for their unconditional support over the course of my journey of investigating the Neural Filters & their valuable inputs on the rendering performance!
In this tutorial, I will outline what the Photoshop Neural Filters do and how I use them in my workflow - multiple examples will be provided for better clarity. Finally, I will talk about some known performance issues with the filters & some feasible workarounds.
Tutorial Structure:
Meet the Neural Filters: What they are and what they do
Why I use Neural Filters: How I use Neural Filters in my giffing workflow
Getting started: The giffing workflow in a nutshell and installing the Neural Filters
Applying Neural Filters onto your gif: Making use of the Neural Filters settings; with multiple examples
Testing your system: recommended if you’re using Neural Filters for the first time
Rendering performance: Common Neural Filters performance issues & workarounds
For quick reference, here are the examples that I will show in this tutorial:
Example 1: Image Enhancement | improving the image quality of gifs prepared from highly compressed video files
Example 2: Facial Enhancement | enhancing an individual's facial features
Example 3: Colour Manipulation | colourising B&W gifs for a colourful gifset
Example 4: Artistic effects | transforming landscapes & adding artistic effects onto your gifs
Example 5: Putting it all together | my usual giffing workflow using Neural Filters
What you need & need to know:
Software: Photoshop 2021 or later (recommended: 2023 or later)*
Hardware: 8GB of RAM; having a supported GPU is highly recommended*
Difficulty: Advanced (requires a lot of patience); knowledge in gifmaking and using video timeline assumed
Key concepts: Smart Layer / Smart Filters
Benchmarking your system: Neural Filters test files**
Supplementary materials: Tutorial Resources / Detailed findings on rendering gifs with Neural Filters + known issues***
*I primarily gif on an M2 Max MacBook Pro that's running Photoshop 2024, but I also have experience gifmaking on a few other Mac models from 2012 ~ 2023.
**Using Neural Filters can be resource intensive, so it’s helpful to run the test files yourself. I’ll outline some known performance issues with Neural Filters and workarounds later in the tutorial.
***This supplementary page contains additional Neural Filters benchmark tests and instructions, as well as more information on the rendering performance (for Apple Silicon-based devices) when subject to heavy Neural Filters gifmaking workflows
Tutorial under the cut. Like / Reblog this post if you find this tutorial helpful. Linking this post as an inspo link will also be greatly appreciated!
1. Meet the Neural Filters!
Neural Filters are powered by Adobe's machine learning engine known as Adobe Sensei. They are a non-destructive method to help streamline workflows that would otherwise be difficult and/or tedious to do manually.
Here are the Neural Filters available in Photoshop 2024:
Skin Smoothing: Removes blemishes on the skin
Smart Portrait: This is a cloud-based filter that allows you to change the mood, facial age, hair, etc. using the sliders+
Makeup Transfer: Applies the makeup (from a reference image) to the eyes & mouth area of your image
Landscape Mixer: Transforms the landscape of your image (e.g. seasons & time of the day, etc), based on the landscape features of a reference image
Style Transfer: Applies artistic styles e.g. texturings (from a reference image) onto your image
Harmonisation: Adjusts the colour balance of your image based on the lighting of the background image+
Colour Transfer: Applies the colour scheme (of a reference image) onto your image
Colourise: Adds colours onto a B&W image
Super Zoom: Zoom / crop an image without losing resolution+
Depth Blur: Blurs the background of the image
JPEG Artefacts Removal: Removes artefacts caused by JPEG compression
Photo Restoration: Enhances image quality & facial details
+These three filters aren't used in my giffing workflow. The cloud-based nature of Smart Portrait leads to disjointed-looking frames. For Harmonisation, applying it on a gif causes a Neural Filters timeout error. Finally, Super Zoom does not currently support output as a Smart Filter.
If you're running Photoshop 2021 or an earlier version of Photoshop 2022, you will see a smaller selection of Neural Filters.
Things to be aware of:
You can apply up to six Neural Filters at the same time
Filters where you can use your own reference images: Makeup Transfer (portraits only), Landscape Mixer, Style Transfer (not available in Photoshop 2021), and Colour Transfer
Later iterations of Photoshop 2023 & newer: The first three default presets for Landscape Mixer and Colour Transfer are currently broken.
2. Why I use Neural Filters
Here are my four main Neural Filters use cases in my gifmaking process. In each use case I'll list out the filters that I use:
Enhancing Image Quality:
Common wisdom is to find the highest quality video to gif from for a media release & avoid YouTube whenever possible. However, for smaller / niche media (e.g. new & upcoming musical artists), prepping gifs from highly compressed YouTube videos is inevitable.
So how do I get around this? I have found Neural Filters pretty handy when it comes to both correcting issues from video compression & enhancing details in gifs prepared from these highly compressed video files.
Filters used: JPEG Artefacts Removal / Photo Restoration
Facial Enhancement:
When I prepare gifs from highly compressed videos, something I like to do is to enhance the facial features. This is again useful when I make gifsets from compressed videos & want to fill up my final panel with a close-up shot.
Filters used: Skin Smoothing / Makeup Transfer / Photo Restoration (Facial Enhancement slider)
Colour Manipulation:
Neural Filters are a powerful way to do advanced colour manipulation - whether I want to quickly transform the colour scheme of a gif or transform a B&W clip into something colourful.
Filters used: Colourise / Colour Transfer
Artistic Effects:
This is one of my favourite things to do with Neural Filters! I enjoy using the filters to create artistic effects by feeding textures that I've downloaded as reference images. I also enjoy using these filters to transform the overall atmosphere of my composite gifs. The gifsets where I've leveraged Neural Filters for artistic effects can be found under this tag on usergif.
Filters used: Landscape Mixer / Style Transfer / Depth Blur
How I use Neural Filters over different stages of my gifmaking workflow:
I want to outline how I use different Neural Filters throughout my gifmaking process. This can be roughly divided into two stages:
Stage I: Enhancement and/or Colourising | Takes place early in my gifmaking process. I process a large amount of component gifs by applying Neural Filters for enhancement purposes and adding some base colourings.++
Stage II: Artistic Effects & more Colour Manipulation | Takes place when I'm assembling my component gifs in the big PSD / PSB composition file that will be my final gif panel.
I will walk through this in more detail later in the tutorial.
++I personally like to keep the component gifs in their original resolution (a mixture of 1080p & 4K), to get the best possible results from the Neural Filters and have more flexibility later on in my workflow. I resize & sharpen these gifs after they're placed into my final PSD composition files in Tumblr dimensions.
3. Getting started
The essence is to output Neural Filters as a Smart Filter on the smart object when working with the Video Timeline interface. Your workflow will contain the following steps:
Prepare your gif
In the frame animation interface, set the frame delay to 0.03s and convert your gif to the Video Timeline
In the Video Timeline interface, go to Filter > Neural Filters and output to a Smart Filter
Flatten or render your gif (either approach is fine). To flatten your gif, play the "flatten" action from the gif prep action pack. To render your gif as a .mov file, go to File > Export > Render Video & use the following settings.
Setting up:
o.) To get started, prepare your gifs the usual way - whether you screencap or clip videos. You should see your prepared gif in the frame animation interface as follows:
Note: As mentioned earlier, I keep the gifs in their original resolution right now because working with a larger dimension document allows more flexibility later on in my workflow. I have also found that I get higher quality results working with more pixels. I eventually do my final sharpening & resizing when I fit all of my component gifs to a main PSD composition file (that's of Tumblr dimension).
i.) To use Smart Filters, convert your gif to a Smart Video Layer.
As an aside, I like to work with everything in 0.03s until I finish everything (then correct the frame delay to 0.05s when I upload my panels onto Tumblr).
For convenience, I use my own action pack to first set the frame delay to 0.03s (highlighted in yellow) and then convert to timeline (highlighted in red) to access the Video Timeline interface. To play an action, press the play button highlighted in green.
Once you've converted this gif to a Smart Video Layer, you'll see the Video Timeline interface as follows:
ii.) Select your gif (now as a Smart Layer) and go to Filter > Neural Filters
Installing Neural Filters:
Install the individual Neural Filters that you want to use. If the filter isn't installed, it will show a cloud symbol (highlighted in yellow). If the filter is already installed, it will show a toggle button (highlighted in green)
When you toggle this button, the Neural Filters preview window will look like this (where the toggle button next to the filter that you use turns blue)
4. Using Neural Filters
Once you have installed the Neural Filters that you want to use in your gif, you can toggle on a filter and play around with the sliders until you're satisfied. Here I'll walk through multiple concrete examples of how I use Neural Filters in my giffing process.
Example 1: Image enhancement | sample gifset
This is my typical Stage I Neural Filters gifmaking workflow. When giffing older or more niche media releases, my main concern is the video compression that leads to a lot of artefacts in the screencapped / video clipped gifs.
To fix the artefacts from compression, I go to Filter > Neural Filters, and toggle JPEG Artefacts Removal filter. Then I choose the strength of the filter (boxed in green), output this as a Smart Filter (boxed in yellow), and press OK (boxed in red).
Note: The filter has to be fully processed before you can press the OK button!
After applying the Neural Filters, you'll see "Neural Filters" under the Smart Filters property of the smart layer
Flatten / render your gif
Example 2: Facial enhancement | sample gifset
This is my routine use case during my Stage I Neural Filters gifmaking workflow. For musical artists (e.g. Maisie Peters), YouTube is often the only place where I'm able to find some videos to prepare gifs from. However, even the highest resolution video available on YouTube is highly compressed.
Go to Filter > Neural Filters and toggle on Photo Restoration. If Photoshop recognises faces in the image, there will be a "Facial Enhancement" slider under the filter settings.
Play around with the Photo Enhancement & Facial Enhancement sliders. You can also expand the "Adjustment" menu to make additional adjustments, e.g. removing noise and reducing different types of artefacts.
Once you're happy with the results, press OK and then flatten / render your gif.
Example 3: Colour Manipulation | sample gifset
Want to make a colourful gifset but the source video is in B&W? This is where Colourise from Neural Filters comes in handy! This same colourising approach is also very helpful for colouring poorly-lit scenes, as detailed in this tutorial.
Here's a B&W gif that we want to colourise:
Highly recommended: add some adjustment layers onto the B&W gif to improve the contrast & depth. This will give you higher quality results when you colourise your gif.
Go to Filter > Neural Filters and toggle on Colourise.
Make sure "Auto colour image" is enabled.
Play around with further adjustments e.g. colour balance, until you're satisfied then press OK.
Important: When you colourise a gif, you need to double check that the resulting skin tone is accurate to real life. I personally go to Google Images and search up photoshoots of the individual / character that I'm giffing for quick reference.
Add additional adjustment layers until you're happy with the colouring of the skin tone.
Once you're happy with the additional adjustments, flatten / render your gif. And voila!
Note: For Colour Manipulation, I use Colourise in my Stage I workflow and Colour Transfer in my Stage II workflow to do other types of colour manipulations (e.g. transforming the colour scheme of the component gifs)
Example 4: Artistic Effects | sample gifset
This is where I use Neural Filters for the bulk of my Stage II workflow: the most enjoyable stage in my editing process!
Normally I would be working with my big composition files with multiple component gifs inside it. To begin the fun, drag a component gif (in PSD file) to the main PSD composition file.
Resize this gif in the composition file until you're happy with the placement
Duplicate this gif. Sharpen the bottom layer (highlighted in yellow), and then select the top layer (highlighted in green) & go to Filter > Neural Filters
I like to use Style Transfer and Landscape Mixer to create artistic effects from Neural Filters. In this particular example, I've chosen Landscape Mixer
Select a preset or feed a custom image to the filter (here I chose a texture that I have on my computer)
Play around with the different sliders e.g. time of the day / seasons
Important: uncheck "Harmonise Subject" & "Preserve Subject" - these two settings are known to cause performance issues when you render a multiframe smart object (e.g. for a gif)
Once you're happy with the artistic effect, press OK
To ensure you preserve the actual subject you want to gif (because Preserve Subject is unchecked), add a layer mask onto the top layer (with Neural Filters) and mask out the facial region. You might need to play around with the Layer Mask Position keyframes or Rotoscope your subject in the process.
After you're happy with the masking, flatten / render this composition file and voila!
Example 5: Putting it all together | sample gifset
Let's recap the Neural Filters gifmaking workflow and where Stage I and Stage II fit in my gifmaking process:
i. Preparing & enhancing the component gifs
Prepare all component gifs and convert them to smart layers
Stage I: Add base colourings & apply Photo Restoration / JPEG Artefacts Removal to enhance the gif's image quality
Flatten all of these component gifs and convert them back to Smart Video Layers (this process can take a lot of time)
Some of these enhanced gifs will be Rotoscoped so this is done before adding the gifs to the big PSD composition file
ii. Setting up the big PSD composition file
Make a separate PSD composition file (Ctrl / Cmd + N) that's of Tumblr dimension (e.g. 540px in width)
Drag all of the component gifs used into this PSD composition file
Enable Video Timeline and trim the work area
In the composition file, resize / move the component gifs until you're happy with the placement & sharpen these gifs if you haven't already done so
Duplicate the layers that you want to use Neural Filters on
iii. Working with Neural Filters in the PSD composition file
Stage II: Neural Filters to create artistic effects / more colour manipulations!
Mask the smart layers with Neural Filters to both preserve the subject and avoid colouring issues from the filters
Flatten / render the PSD composition file: the more component gifs in your composition file, the longer the exporting will take. (I prefer to render the composition file into a .mov clip to prevent overriding a file that I've spent effort putting together.)
Note: In some of my layout gifsets (where I've heavily used Neural Filters in Stage II), the rendering time for the panel took more than 20 minutes. This is one of the rare instances where I was maxing out my computer's memory.
Useful things to take note of:
Important: If you're using Neural Filters for Colour Manipulation or Artistic Effects, you need to take a lot of care ensuring that the skin tone of nonwhite characters / individuals is accurately coloured
Use the Facial Enhancement slider from Photo Restoration in moderation: if you max out the slider value, you risk oversharpening your gif later on in your gifmaking workflow
You will get higher quality results from Neural Filters by working with larger image dimensions: This gives Neural Filters more pixels to work with. You also get better quality results by feeding higher resolution reference images to the Neural Filters.
Makeup Transfer is more stable when the person / character has minimal motion in your gif
You might get unexpected results from Landscape Mixer if you feed it a reference image that doesn't feature a distinctive landscape. This is not always a bad thing: for instance, I have used this texture as a reference image for Landscape Mixer, to create the shimmery effects as seen in this gifset
5. Testing your system
If this is the first time you're applying Neural Filters directly onto a gif, it will be helpful to test out your system yourself. This will help:
Gauge the expected rendering time that you'll need to wait for your gif to export, given specific Neural Filters that you've used
Identify potential performance issues when you render the gif: this is important and will determine whether you will need to fully playback your gif before flattening / rendering the file.
Understand how your system's resources are being utilised: Inputs from Windows PC users & Mac users alike are welcome!
About the Neural Filters test files:
Contains six distinct files, each using different Neural Filters
Two sizes of test files: one copy in full HD (1080p) and another copy downsized to 540px
One folder containing the flattened / rendered test files
How to use the Neural Filters test files:
What you need:
Photoshop 2022 or newer (recommended: 2023 or later)
Install the following Neural Filters: Landscape Mixer / Style Transfer / Colour Transfer / Colourise / Photo Restoration / Depth Blur
Recommended for some Apple Silicon-based MacBook Pro models: Enable High Power Mode
How to use the test files:
For optimal performance, close all background apps
Open a test file
Flatten the test file into frames (load this action pack & play the “flatten” action)
Take note of the time it takes until you’re directed to the frame animation interface
Compare the rendered frames to the expected results in this folder: check that all of the frames look the same. If they don't, you will need to playback the test file in full before flattening the file.†
Re-run the test file without the Neural Filters and take note of how long it takes before you're directed to the frame animation interface
Recommended: Take note of how your system is utilised during the rendering process (more info here for MacOS users)
†This is a performance issue known as flickering that I will discuss in the next section. If you come across this, you'll have to playback a gif where you've used Neural Filters (on the video timeline) in full, prior to flattening / rendering it.
Factors that could affect the rendering performance / time (more info):
The number of frames, dimensions, and colour bit depth of your gif (a rough memory estimate illustrating this factor appears after this list)
If you use Neural Filters with facial recognition features, the rendering time will be affected by the number of characters / individuals in your gif
Most resource intensive filters (powered by the largest machine learning models): Landscape Mixer / Photo Restoration (with Facial Enhancement) / JPEG Artefacts Removal
Least resource intensive filters (powered by the smallest machine learning models): Colour Transfer / Colourise
The number of Neural Filters that you apply at once / The number of component gifs with Neural Filters in your PSD file
Your system: system memory, the GPU, and the architecture of the system's CPU+++
+++ Rendering a gif with Neural Filters demands a lot of system memory & GPU horsepower. Rendering will be faster & more reliable on newer computers, as these systems have CPUs & GPUs with more modern instruction sets that are geared towards machine learning-based tasks.
Additionally, the unified memory architecture of Apple Silicon M-series chips is found to be quite efficient at processing Neural Filters.
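As a rough back-of-envelope sketch (mine, not from the tutorial) of why frame count, dimensions, and bit depth dominate memory use: every frame held in memory is roughly width * height * channels * bytes per channel. The 3-channel default and the example numbers are assumptions for illustration.

def estimated_frame_buffer_mb(frames: int, width: int, height: int,
                              bit_depth: int = 8, channels: int = 3) -> float:
    """Approximate uncompressed in-memory size of a gif's frames, in MB."""
    bytes_per_channel = bit_depth // 8
    total_bytes = frames * width * height * channels * bytes_per_channel
    return total_bytes / (1024 ** 2)

# e.g. a 100-frame 1080p gif at 8-bit colour holds ~593 MB of raw frame data,
# while the same gif downsized to 540x304 holds only ~47 MB, which is why
# downsizing and a smaller bit depth appear among the workarounds below.
print(estimated_frame_buffer_mb(100, 1920, 1080))  # ~593.3
print(estimated_frame_buffer_mb(100, 540, 304))    # ~47.0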
6. Performance issues & workarounds
Common performance issues:
Below I will discuss several common issues related to rendering or exporting a multi-frame smart object (e.g. your composite gif) that uses Neural Filters. These issues are commonly caused by insufficient system memory and/or GPU resources.
Flickering frames: in the flattened / rendered file, Neural Filters aren't applied to some of the frames+-+
Scrambled frames: the frames in the flattened / rendered file aren't in order
Neural Filters exceeded the timeout limit error: this is normally a software related issue
Long export / rendering time: long rendering time is expected in heavy workflows
Laggy Photoshop / system interface: having to wait quite a long time to preview the next frame on the timeline
Issues with Landscape Mixer: Using the filter gives ill-defined results (common in older systems)--
Workarounds:
Workarounds that could reduce unreliable rendering performance & long rendering time:
Close other apps running in the background
Work with smaller colour bit depth (i.e. 8-bit rather than 16-bit)
Downsize your gif before converting to the video timeline-+-
Try to keep the number of frames as low as possible
Avoid stacking multiple Neural Filters at once. Try applying & rendering the filters that you want one by one
Specific workarounds for specific issues:
How to resolve flickering frames: If you come across flickering, you will need to playback your gif on the video timeline in full to find the frames where the filter isn't applied. You will need to select all of the frames to allow Photoshop to reprocess them, before you render your gif.+-+
What to do if you come across the Neural Filters timeout error? This is caused by several incompatible Neural Filters, e.g. Harmonisation (both the filter itself and as a setting in Landscape Mixer), Scratch Reduction in Photo Restoration, and trying to stack multiple Neural Filters with facial recognition features.
If the timeout error is caused by stacking multiple filters, a feasible workaround is to apply the Neural Filters that you want to use one by one over multiple rendering sessions, rather than all of them in one go.
+-+This is a very common issue for Apple Silicon-based Macs. Flickering happens when a gif with Neural Filters is rendered without being previously played back in the timeline.
This issue is likely related to the memory bandwidth & the GPU cores of the chips, because not all Apple Silicon-based Macs exhibit this behaviour (i.e. devices equipped with Max / Ultra M-series chips are mostly unaffected).
-- As mentioned in the supplementary page, Landscape Mixer requires a lot of GPU horsepower to be fully rendered. For older systems (pre-2017 builds), there are no workarounds other than to avoid using this filter.
-+- For smaller dimensions, the size of the machine learning models powering the filters play an outsized role in the rendering time (i.e. marginal reduction in rendering time when downsizing 1080p file to Tumblr dimensions). If you use filters powered by larger models e.g. Landscape Mixer and Photo Restoration, you will need to be very patient when exporting your gif.
7. More useful resources on using Neural Filters
Creating animations with Neural Filters effects | Max Novak
Using Neural Filters to colour correct by @edteachs
I hope this is helpful! If you have any questions or need any help related to the tutorial, feel free to send me an ask 💖
#photoshop tutorial#gif tutorial#dearindies#usernik#useryoshi#usershreyu#userisaiah#userroza#userrobin#userraffa#usercats#userriel#useralien#userjoeys#usertj#alielook#swearphil#*#my resources#my tutorials
Text
Anomalous Reaction
ao3
Summary: Nines experiences synthetic dermal tone shift triggered by emotional stimuli (aka blushing) when you call him cute for the first time.
Contents: tooth-rotting fluff, RK900 x reader, gender-neutral reader, any pronouns, android x human relationship, established relationship, Slightly Awkward Android Feelings™, not beta read
a/n: sorry americans, but I don't understand feet and stuff. 180cm is around 6 feet if google isn't lying to me.
This is the first time I’m writing him, hope you enjoy:)
English isn't my first language.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It started with a coffee. Or, more accurately, your coffee and his silent observation of it.
You two were sitting in the breakroom at the station, catching a break from the investigation you were assigned to with the android. It was a pathetic effort to call it that; it was more like a sterile conference pod with a coffee machine and a few tables thrown in as an afterthought. However, it had a window which lit the otherwise sad little room with the colours of the outside world. Pink, yellow and blue neon colours crept through the glass, illuminating the side of the RK900’s face. He had strict but delicate features balanced perfectly, thanks to the technology and engineering of CyberLife. His light eyebrows always sat stern above his icy grey eyes, as they observed everyone and everything around him. The comfortable blueness of his LED shone through the meek colours of the jumping ads, matching the fluorescent part of his jacket.
Nines didn’t need breaks; he was, after all, designed to be the best, most durable and most efficient android CyberLife had ever created, but for your sake he always sat down with you while you drank your coffee. He would never admit to it, but the minor malfunctions your presence caused in his system were … preferable and positive.
It was such a strange sight seeing his nearly six-foot-tall figure sitting inhumanly stiff and perfect across from your warm and gentle frame, while you drank with all of your imperfections in your posture.
You blew gently on the steaming cup, holding it with both hands, enjoying the warmth and the hot scent of the liquid that washed over you. Cold, grey eyes watched your every move silently and curiously from across the table. You suggested he buy thirium so he could also ‘drink’ something on your break but he refused every time, stating that he didn’t need to drink or eat. So he remained stoic and unmoving every day, as you two sat down together.
“I don’t understand the appeal,” – he started in his monotone, flat voice – “based on my calculations, caffeine has a bitter taste. However, other research supports the fact that humans prefer foods and liquids which are sweet.”
A tender smile danced around the corner of your lips as you answered him – “That's true. But maybe it's not about the caffeine, but rather about the feeling of warmth and comfort it brings out in people.”
“Comfort can be simulated.” – His strict eyebrows furrowed ever so slightly, analysing your words. You found it so endearing about him. Always researching and fact-checking, then returning to you, asking questions in the way others found intimidating. You knew that he would never reveal such ‘confidential information’ about himself outright, but you saw him trying to understand how feelings worked. In his own way, he was making tremendous progress. As time went on, you discovered a soft light behind his icy irises every time you praised and agreed with his findings.
“Sure,” – you replied, sipping. – “But it's different when it's real.”
He was quiet again for some time. His LED blinked between blue and yellow, processing your words carefully, probably trying to come up with an optimal response. You had learned to wait for however much time he needed to ‘think’ about things he’d just learned. It was a fascinating thing to watch him working through these new challenges, as he didn’t change at all. His face, no matter what happened, always wore the exact same tone-deaf expression, as if he was reading a grocery list. The real chaos was buried deep inside him, between the blue wires of his chest and brain, pulsating, running and buzzing with speed and a quite frustrating anger which demanded answers.
After what felt like 10 minutes, his low voice penetrated the silence of the room. – “Every day you spend time with me… I find it agreeable.”
“You do?”
“Yes.”
You tilted your head, studying him for a moment. The sharp line of his jaw framed his handsome face as he looked at you seriously. You couldn’t help but get lost in the details of his pale features, the curve of his perfect forehead and the line of his nose as it flowed down his thin lips, then travelling back towards his irises. His expression was calm as usual, but his eyes betrayed the slightest flicker of something sweet. They were swimming in the mirrored lights of the neon ads, highlighted from within with a strange light that wasn’t part of his usual appearance.
He was charming in the way the snow-filled Alps are. Freezing at first glance but calm and hiding something pleasant. They sing a sort of quietness into the air, which you have to experience to understand, and even then a few hidden mysteries remain. However, instead of being threatening, it’s welcoming towards you.
How could someone so strong, designed for perfection, look at you like you were beyond logic?
A soft smile sneaked into your lips and eyes, making them twinkle as you continued to look at him.
“You are cute, Nines.”
His head jerked up in the tiniest movement, surprised eyes locking onto yours as his LED started to run around in yellow. He blinked once, twice, thrice.
“Cute?” — he echoed the word, which felt strange on his own tongue. His emotionless voice was quieter than usual, as if even uttering the word itself could bring danger with it.
“Yeah,” — you said with a shrug, like it was the most obvious thing in the world. — “In a ‘deadpan robot boyfriend who pretends he doesn’t care about human things but definitely does’ kind of way.”
Now he froze. His already motionless structure became, if possible, even more still and stiff. The flicker of yellow at his temple increased, became rapid, almost frantic in its blinking, urging a response. He dropped his gaze to the table, then to the floor, then back to you, as if trying to locate the correct social response and failing miserably. Blue thirium started to slowly drip into the pale canvas of his cheeks, dyeing them the lightest of azure.
“I… do not believe I was designed to be… ‘cute’.” — he said awkwardly and robotically, in a way that was ridiculously charming.
“Maybe not,” — you said, leaning forward just a little with a teasing grin on your lips. — “But you are. Especially when you make that face.”
“What face?” — he shot back immediately, blinking again.
“That one.” — you said, pointing. — “The one you are making right now. All confused and blushing.”
Upon your words, the azure deepened, creating a deeper ocean across his handsome face. He looked utterly scandalised, like a Victorian man seeing a bare ankle for the first time. — “I’m not blushing. My internal temperature regulator is functioning within normal parameters.”
You raised an eyebrow, clearly amused. — “Then what’s with the blue cheeks, big boy?”
He glanced away, clearly at a loss. You thought he looked completely and irresistibly beautiful like this. His LED becoming one yellow circle from the constant flickering, his averted grey eyes running softer checks and searching for an answer, and of course, his light skin flowing with his cobalt shade. For a moment, the always composed RK900 looked … shy. And oh god, if that didn’t make him even cuter.
“I am… experiencing an unfamiliar psychological response.” — he admitted, voice barely above a whisper. — “… possibly triggered by your compliment or your proximity.”
You blinked, confused for a second, decoding his response, then laughed softly. — “You really don’t know how to take being called cute, do you?”
“No,” — he stated plainly. “But… I don’t object to it.”
Your smile softened as you looked at him affectionately. “Good, because I mean it.”
Silence entered the room again, but it wasn’t an awkward one. It was filled with soft glances and unspoken thoughts. You placed one of your hands onto his outstretched ones, petting them slowly with your thumb.
After a while, his thirium levels settled back to normal and his paleness returned. As he looked up, his chilly eyes had a gentle look to them and his LED faded back to a calm blue.
“You are… cute too.” — he said, like it was the bravest and most challenging thing he’d ever done.
You couldn’t help but laugh again. However, this time you were the one blushing.
#my writings#detroit become human#dbh#rk900#rk900 x reader#rk900 x reader fluff#fluff#rk900 fluff#dbh nines x reader#dbh nines#rk900 nines#dbh fics#dbh fic#ao3#dbh fandom#dbh fanfic
Text
GOOFY SONGS DOUBLE TIME
scout: an eye for an eye, a leg for a leg. when i think of jeremy, i think very specifically of hayloft II by mother mother. i think the blu scout has a bird’s eye view of his life that his counterpart simply doesn’t have. he understands on a deeper level what his family does for a living; he’s intimately aware of what decisions his mother has made to keep the family afloat. this baby has a gun, and everyone better run or face what comes out of the barrel. and scout’s choice of work hasn’t removed what his family would consider his good traits. but they really don’t even know him anymore to make that determination.
soldier: you fire off missiles because you hate yourself, but do you know you're demolishing me? jane doe does indeed get renegade by big red machine, featuring t swizzle. an absolute undiscovered banger in the global pop star's discography, really i can see anyone (but particularly demo) yelling this to the soldier. soldier is, to me, a man who is plagued by his own thoughts. and his own thoughts will halt him faster than any outside force ever could, but he won't recognize or care about that until it's just too late. because really, is it his anxiety that stops him from giving everything, or does he just not want to?
pyro: pyro does get join the party by jt music. frankly any fan made fnaf song can slide right into this position but i think join the party is the best fit here. i love the juxtaposition of the verses and the chorus. the seriousness of the verses against the lightness of the chorus i feel wraps pyro up nicely. they can tell there’s something more happening, they just can’t confidently parse through the truth from the delusions. doesn’t mean they aren’t paying attention.
demo: okay, so is there a reason nobody likes cosmo sheldrake? because from what i see he hasn't done anything so egregious that when i mention the man everyone groans, except have a couple songs that went viral on tiktok. anyway, demoman gets the moss by cosmo sheldrake. i just think it's a very demo song, that intertwines this theme that i like to think about with demo, of what it means to be an educated man and also hold space in his heart for his culture. i think that it's particularly something the blu demo struggles with more than his counterpart does, but it's because he struggles with it more that he's learned where his balance is with it.
heavy: so here’s a confession— i love starbomb. love game grumps, love starbomb. and the only song i think of when i think of this guy is krobus is hot. i do think mikhail has regular fantasies of massacring the team. and i think the repetition of his day to day life grinds his nerves into dust when he takes a step back and looks at it. he remembers when it used to be fun. he remembers when the violence was fun. now it’s just there and he does it because there’s nothing else to do. he just wants to do something different. he wants something to change and he’s the only one who can do that for himself. and with the respawn machine it’s so tantalizing… they’ll just come back!
engineer: another living tombstone song that took a moment of pondering to decide who to give it to. epoch by the living tombstone is one of my favorite songs in the living tombstone’s discography, if not one of my favorite songs of all time. it is one i return to time and time again. a grim promise. “i know we can make it better than it ever was”. to go into how deeply i associate that one line with dell would require an entire essay in and of itself, and i don’t want to write it, so i’ll sum it up with: regret is a hell of a drug, and generational trauma will make it worse.
medic: you know who is an underrated artist? blue kid. i can’t remember where i first heard the dismemberment song, but looking back i don’t see how it could be anything other than the doctor. it has a lightness, a softness and an intense driving force that truly only fritz can accomplish. and i’m serious, the blu medic shines in battle. i’ve never been focused down more as a red member than with a blu medic. and i let them kill me, i love those funky little blue doctors. all medics get a free kill on max. but i really think about the bridge, specifically “i’m taking your narrative, and i’m making it mine”. i think the blu medic tries so hard to not be the red medic he ends up just like him. just with different emphases. and it works! it’s charming, in its own freaky little way! but it’s not what he wants to be.
sniper: i think both snipers would like ska, and that’s why i’m picking smash mouth’s walking on the sun for our blu aligned australian. what one could argue is a chiller song with a faster pace. i’ll even take it a step farther and say he’s very specifically walking on the sans, which is just walking on the sun with sans (yes undertale sans) as the instrumentals. it’s ear grinding, most of the blends are bad regardless of where you go to listen to it. “some were spellbound, some were hellbound, some they fell down, and some got back up and fought against the meltdown” which i think is a fair statement to assign to snipes. blu snipes may not be as perceptive as his counterpart on the field, but that doesn’t mean he’s not watching what’s going on around him. he just prefers to go through it with a lightness his counterpart can no longer conjure or perform.
spy: it took me a second to decide who got this song; but i do think my ordinary life by the living tombstone goes to the blue clad frenchman. and there’s a couple of lyrics i really think about that i really assign to spy; particularly “they let me lie to them and don’t feel like they’ve been misled”. like that is a canonically spy statement, it’s his whole job on the team— to infiltrate and destroy from the inside. but also because i do ponder over the sheer amount of imposter syndrome he’s probably dealing with. i wonder how he copes with it.
#team fortress 2#team fortress two#tf2 medic#tf2 heavy#tf2 sniper#tf2 engineer#tf2 pyro#tf2 scout#tf2 spy#tf2 soldier#tf2 demoman#tf2 demo#oh! by the way!#ten. :)
Text
Time period post: Car culture 2 - Hot rods and racing
Not going to lie, this one was hard for me as I don’t have that much existing background knowledge on the topic. I had to do a lot more digging than usual, so there’ll be some more links at the end of this. But I figure it’s an important thing to learn, especially when thinking about the Curtis gang, as they are “car” Greasers, if you want to subcategorize them. Pony does mention souped-up cars and it’s established several of the guys race or like to watch them— at the very least they all speed and enjoy it. While they might not be full-on hot rodders themselves, it could simply be useful to flesh out the world/understanding of this corner of car culture.
Also, if you do know more about cars and car history, please do add on! These are intended to be informative and helpful so all help is appreciated lol.
Hot rodding-
It started by necessity: with the post-war lack of cars being made, or perhaps not affording a brand new car, you go around to shops and yards, get parts, and modify till you have a nicer working machine. Then it starts being picked up in racing and by the youth, becoming a hobby within itself.
At its core, all a “hot rod” is, is a souped-up car. Modified usually for the sake of performance- primarily speed. Parts are taken out or added, the front is lowered, the engine is bigger and so on. The culture started in the 20s-30s and a lot of the real examples of these kinds of cars are older; it’d be done with modern cars (40s-60s) but hardcore fans stuck with classics. Also, mind you, a lot of these unique modifications were not street legal or just barely skirting the line.
These modifications are to show off. Not just in performance but also looks. Really, a lot of the time it’s hanging around showing off your work to each other and putting it to the test; major mods are still in car shows today. Mainly show rather than racing— as far as I’m aware. These things need constant attention and tweaking too, not just as a way to improve the next time you race but because it’s not a standard car. All these little things add up.
The hobby of making and racing these cars really ramped up in the 40s-50s but by the mid 60s there was a craze. It was featured in a lot of media, there were industries catering now and magazines! Car craft, hot rod, etc.
Drag racing-
Now here it gets a little complicated, as there is a split off into drag racing becoming somewhat of a legitimate sport — which, let’s be real, the kind any of the guys on the East side were engaging in was not “official” or completely legal. Also not to be confused with street racing of the Fast and Furious / Baby Driver type— though it could be on a street/public street, that’s not the same thing necessarily. Too many turns and hindrances and a totally other type of reckless. Really frustrating to look up, as you get more of the regulated sport than the actual thing.
Their type of drag race is a street race, set out on a short, straight stretch of road side by side. Usually on the edge of town or a start of a highway or route- a long straight stretch out of the way. In movies at least there’s always a dramatic local name for it too, like “Thunder Road” or “Dead Man’s Curve”, something like that. It gives you a notable location besides “uh you know that one place out uh by uh.” Lol. “Burn out” / “burn rubber” is essentially burning off a part of the tire onto the road at the start- it helps with traction. Don’t ask me how. Straightforward: first to cross the finish line wins (in this case probably a landmark or a set point rather than a literal line)
Why race? Most cited reasons -> Community, excitement, bets
You do not need a hot rod to race; it’s not a requirement, mind you, but it’d certainly help. Added to that, not all cars were extremely tricked-out deconstructed art pieces; you could’ve had a lot of non-visible work done on the car, etc.
A good character to look to in regard to Hot rodding and racing is John Milner in American Graffiti (1973).
Club car v hot rod-
A club car is basically the main car of a social club, as, say, many members don’t have a car, or it just fits the most members to go riding around with. (Remember bench seats: you could fit so many more people in a car.) It could be a hot rod; more likely it’s just a car.
Social club v gang-
A social club is a pretty broad term that’s just a group of people brought together by a common interest. By definition a book club is a social club, technically. Now, there were a lot of juvenile-delinquent-aligned social club types- based around cars or just the general lifestyle. Sometimes there was little difference at all and it just gave a gang more legitimacy/threw police off, as it made them seem more reputable.
So really the big difference between a Greaser gang and a Greaser social club is police and public scrutiny. Clubs had a - I wouldn’t say easier time, but slightly more trust? However, this also isn’t to say that all social clubs are fronts for a gang, as they can just be tough-looking kids who are chill and mostly law abiding.
Clothing too: before the internet, identity was built and shown in person. How you dressed was worn literally on your sleeve. Not like the remnants of today but fully living and showing yourself and your affiliations through appearance. Having a group look, or group jackets- the conformity in a way was individuality. It’s a bit paradoxical. But club jackets, the hair, the jeans etc. are a great example of this.
(Really funny aside but looking up social clubs and it’s Rockstar telling me how to join a gang in GTA)
Junkyards-
Still very much in existence, and a great place to scavenge for just about anything, but especially car parts. Same goes for salvage yards, wrecking yards, second-hand parts shops etc. Greasers and other car culture staples would be overflowing in shop classes and mechanic shops (stereotypically anyway), so there’s the knowledge and the access there already. Sometimes a part or two might go missing… who knows.
Really, I’ve started thinking about yards like this more as I visit the country more and see someone’s yard filled with rusted cars and a bunch of parts and stuff like this. Like part of it might just be someone leaving it to rot, but another part is— yeah, they’re pretty valuable to the community. Not just to car-loving teenagers! Hell, the Curtises have a few dead cars in their yard (and a bunch of boards ??? And supplies for some reason???) and the lot appears to have at some point been a dumping ground.
#the outsiders#Curtis gang#writing help#writing reference#1960s#cars#vintage cars#Classic cars#greasers#time period post#time period post: hot rods and racing
Text
Wife Goals: Karlach Cliffgate
Today, on the second-to-last update of my prolonged emotional breakdown masquerading as a series of blog posts about fictional female characters, we're following the red-clad, horned-crown-wearing, axe-wielding empress Edelgard von Hresvelg with the red-skinned, horn-bearing, axe-wielding tiefling barbarian Karlach Cliffgate from an obscure video game called Baldur's Gate 3!
Let's see, how best to describe Karlach's appeal, hmm... Oh, I've got it.
Karlach is a blade of grass growing out of a crack in the pavement of an abandoned parking lot.
Every detail you learn about this woman's life story paints a picture of someone being moved from one hostile environment to another, each less habitable than the last. She's a tiefling - i.e. the descendant of people who made deals with devils, and thus born with devil features as a result of her ancestors' past deeds, or at least so everyone believes. Given how clearly negative that bit of lore is, would you be surprised to know that tieflings are often subjected to bigotry?
On top of that, she was born to a poor family in a big city, and basically spent her youth as a street urchin. Her parents died when she was just reaching adulthood, but luckily Karlach was taken in by an up-and-coming politician. Unluckily, said politician sold her as a slave to an archdevil named Zariel, who forced Karlach to act as a gladiator for entertainment. Oh, and she had Karlach's heart taken out and replaced with a demonic engine, ostensibly to make Karlach a better fighter, although the fact that said engine will slowly break and burn Karlach to death if she ever leaves literal Hell is probably the bigger reason - gotta keep your gladiator in line, after all. Karlach managed to escape, only to be chased by a bounty hunter and then captured by brain-eating squid people and infected with a larval form of said squid people that would eventually devour her soul and turn her into said squid-person - and if that doesn't kill her first, her slowly malfunctioning infernal engine for a heart will! And that's when you, another victim of said squid-people, meet her!
This is a woman who has suffered figurative AND literal Hell several times over in her life. She's faced bigotry, lost her family, been betrayed by her mentor, been forced to fight to the death for the amusement of literal devils, had her brain invaded, and literally had her heart broken and torn out of her chest. She is literally and inescapably doomed in at least two different ways to die a horrible and agonizing death. Take every reason you could give a character to have a bleak, pessimistic outlook on life and Karlach's got it in spades.
But she refuses to do so. Like that blade of grass surrounded on all sides by dead, suffocating pavement, Karlach reaches towards the light of the sun and the life it promises. Where the world gives her a million reasons to despair, she holds out hope and strives to live and find joy.
Karlach lived in poverty, but scraped and strived and found work to keep herself going. She was literally sent to Hell, but she fought tooth and claw until she could escape. Squid-men put a parasite in her brain that will devour her from the inside out, but it hasn't yet, which means she has time to try and find a cure. Her heart has literally been torn out and replaced with a machine that will kill her unless she goes back to Literal Hell. Fuck that, she'll find someone to fix it, or die trying, and she'll enjoy every single moment she has as a free woman, no matter how brief they are.
It would be easy to write off Karlach's optimism as naivete or blind hope, but this is not a woman who is unaware of how cruel life can be. In fact, it is her extensive experience with the cruelty of life that has made her determined to be kind and hopeful. You don't escape Hell by giving into despair - you escape by constantly looking for your chance to get out, by striving to reach that crack in the pavement where you can see the sunlight.
Karlach is not the only character who's suffered in Baldur's Gate 3 - in addition to ALL of your (initial) companions having brain parasites in them, they each have their own specific baggage dealing with abusive assholes in their life. Some have reacted to it with denial, some with pessimism, some with self-loathing. But all of them also recognize that in many ways, Karlach is more fucked than they are - she's the only one who seems guaranteed to die no matter what they do. And I think it's notable that every companion expresses affection for Karlach - while they'll quibble with each other over various things, everyone, from the vampire to the warrior toad woman to the cleric of the Goddess of Gaslighting, loves Karlach. Hell, even if you recruit the optional and most explicitly evil companion in the game, you'll find that she loves Karlach too!
And they love her because there is value in her strength and hope. All of the characters in this game are facing impossible odds against unfathomably awful foes, all of them are risking a horrendous death, and Karlach, the most Fucked out of all of them, is able to hold out hope for a miraculous survival, and willing to fight as hard as a fucking devil to get that miracle. How can that not be inspiring to them? How can they not love her?
And thank god they do, because the woman needs love desperately. After spending a decade in Literal Hell, Karlach has been starved of every kind of affection and intimacy. She has had no friends, no family, and, you know, no lovers. Escaping Hell doesn't fully solve the issue, either - while Karlach can make new friends when you recruit her to your party, the infernal engine makes her body run so hot that touching her for more than a brief second can literally leave you with burns. It's a great example of her whole overarching conflict in the story - this is a woman who wants to rebuild her life into something pleasant, but the trauma she's suffered is literally keeping her from connecting with people fully.
But you can help her find ways to repair that engine, to make her fire run a bit cooler, and if you do, well, Karlach gets to make that connection. Her trauma - embodied in that engine, which is now tended by people who love her as much as she wants to love them - finally subsides enough for her to connect with others again.
Unfortunately the engine isn't fully repairable - as you find out, the only way it can possibly be fixed and removed requires Karlach to go back to Hell, which she adamantly refuses to do. She wants to live and be happy, but confronting the thing that hurt her is too much. She tries to convince herself that she's content with the time she has, even if it's brief, so long as she can be happy and with her friends and out of Literal Hell while doing it.
By the end of the game, you and she both confront what a lie that is. Karlach wants to live, she wants it so badly, but there is so much pavement, and a crack of sunlight alone isn't enough. At the end of the game, you and she face a choice: her engine begins to explode, and she can either accept the end, or go back to Hell. She refuses the latter option - unless you offer to go with her, to help her find a way to fix her engine by confronting the source of it and the trauma it inflicted. And, personally, I don't see how you can deny that request. This woman had so much strength and hope for you and everyone else in your party - isn't she owed that strength and hope in return?
The thing about that blade of grass in the parking lot is that every day the pavement cracks a bit more, and if given time, more blades of grass will spring up, and slowly but surely they'll tear the pavement away and grow lush and green. A lot of fans have complained that Karlach's ending is so full of uncertainty - that the game doesn't end with her engine fixed, but either with her burning to death or, in the marginally happier ending, returning to Hell with you in tow to try and find another miracle.
But that's who Karlach is - she's a person surrounded by death, facing impossible odds, who hefts a big axe and carves her way to a miracle in spite of it all. She'll live, of course she will. You don't need to see it happen, because you've already seen it happen. The blade of grass is already reaching up through the pavement and basking in the sunlight. She's going to live. We're going to live. The hope isn't naive - it's what we need to pull through.
...
Also she's huge and sweet and pretty and she could throw me over her shoulder and carry me to safety in her big muscular arms should the need arise and I love her.
Text
Jedi-related Technology — Light of the Jedi
“These were the crafts of the Jedi Order, their Vectors. As the Jedi and the Republic worked as one, so did the great craft and its Jedi contingent. Larger ships exited the Third Horizon’s hangars as well, the Republic’s workhorses: Longbeams. Versatile vessels, each able to perform duties in combat, search and rescue, transport, and anything else their crews might require.
The Vectors were configured as single- or dual-passenger craft, for not all Jedi traveled alone. Some brought their Padawans with them, so they might learn what their masters had to teach. The Longbeams could be flown by as few as three crew, but could comfortably carry up to twenty-four — soldiers, diplomats, medics, techs — whatever was needed.”
“The Vectors were as minimally designed as a starship could be. Little shielding, almost no weaponry, very little assistance. Their capabilities were defined by their pilots. The Jedi were the shielding, the weaponry, the minds that calculated what the vessel could achieve and where it could go. Vectors were small, nimble. A fleet of them together was a sight to behold, the Jedi inside coordinating their movements via the Force, achieving a level of precision no droid or ordinary pilot could match.
They looked like a flock of birds, or perhaps fallen leaves swirling in a gust of wind, all drawn in the same direction, linked together by some invisible connection…some Force. Bell had seen an exhibition on Coruscant once, as part of the Temple’s outreach programs. Three hundred Vectors moving together, gold and silver darts shining in the sun above Senate Plaza. They split apart and wove into braids and whipped past each other at incredible, impossible speed. The most beautiful thing he’d ever seen. People called it a Drift. A Drift of Vectors.”
“[…] Weapons on a Vector could only be operated with a lightsaber key, a way to ensure they were not used by non-Jedi, and that every time they were used, it was a well-considered action.
An additional advantage— the ship’s laser could be scaled up or down via a toggle on the control sticks. Not every shot had to kill. They could disable, warn…every option was available to them.”
“They were riding in another vehicle custom-designed by Valkeri Enterprises for the Jedi — a Vanguard, the land-based equivalent to the Vector. It was also sometimes called a V-wheel, even though the thing didn’t always use its wheels to get around. Every Jedi outpost had at least one as part of its standard kit, and the machine was engineered to operate in all of the planetary environments in which those stationed were situated [?]. It could operate as a wheeled or tracked ground transport, or a repulsorlift speeder for ground too rugged for tank treads. A Vanguard even had limited utility as an amphibious or even submersible vehicle, being able to seal itself off entirely as needed. It could do everything but fly, and that came in handy on Elphrona, where the planet’s strong magnetic fields made certain regions utterly inhospitable to flying craft.
The overall aesthetic was analogous to Vectors — smooth, sleek lines, with curves and straight edges integrated into an appealingly geometric whole. Behind the seats in the driver’s cabin — currently occupied by Indeera Stokes and Loden Greatstorm — was a large, multipurpose passenger area, with space to store any gear that a mission might require. Vanguards were more rugged than Vectors, but were built with many of the same Jedi-related features as their flying cousins. The weapons systems required a lightsaber key, and many of the controls were mechanical in nature, so as to be operated — in an emergency — via an application of the Force rather than through electronics.
No Jedi would use the Force to accomplish something as easily done with their hand — but lives had been saved by the ability to unlock a Vanguard’s hatch from a distance, or fire its weapons, or even make it move.”
“Indeera slipped past them to the rear of the vehicle, where its two Veil speeders were stored on racks, one above the other. Like all the vehicles Valkeri Enterprises built for the Order, they were designed for Force-users, and as such were delicate, highly responsive machines. Little more than a seat strapped to a hollow duralium frame, with a single repulsor and four winglike attachments that sprang from its side, a Veil was basically a flying stick. But if you knew how to ride them, they were incredibly fast and maneuverable. A group of skilled riders, with lightsabers out and ready, could take down entire platoons of armored vehicles while sending blasterfire back at attackers.”
“At the moment, she was aboard the Ataraxia, the Jedi’s beautiful, elegant starship, almost a temple in and of itself.”
“Another ship was visible on his display, outside his command authority but certainly an ally: the Ataraxia, the one large starship under the direct control of the Jedi Order. It was a beautiful ship, designed to subtly evoke the Order’s symbol with its hull and sweeping, curved wings accented in white and gold.”
Text
Cosmo Klein (1978) by Jeff Duntemann AKA "Captain Cosmo", Rochester, NY. Cosmo Klein is based on the COSMAC Elf RCA 1802 microcomputer and features a robot arm, and a CRT face separately controlled by a COSMAC VIP, an 1802-based microcomputer with a supplementary video display chip.
"For all its flaws, the VIP is probably worth the money… The worst thing about the VIP is something that can be said of the ELF-II from Netronics or Quest's Super ELF: If you don't wire wrap it yourself, you won't learn as much. What are you doing this for? If you want to learn microcomputer hardware and software without going broke, the Popular Electronics ELF has no equal. …
COSMO'S FACE -- I take that back; there is something that the VIP is good at: Giving my robot a face. For a while I've been tinkering with a clanking heap of surplus submarine parts and wheelchair motors named Cosmo Klein. The Klein is an obscure mathematical allusion to the Klein Bottle, whose insides are identical to its outsides. Cosmo is a little like that, especially when he tips over and sends his insides spilling out onto the floor. Well, I got the notion that a COSMAC-generated face would be a marvelously humanizing touch. And so it is. If you want to see a good color picture of Cosmo and my VIP (with my own idiotically grinning mug in the background) check out Look Magazine dated April 30, 1979; it's the one with Jane Fonda on the cover. Maybe your library has it. The program which generates the face is included in this book, so I won't describe it here. Though you can't see it, my ELF is also inside, vainly trying to keep the monster from falling on his face. A CMOS robot is an old dream of mine, and I'm working on it, but for now I must pronounce his control circuitry (save for his face) a failure. Now you know who Captain Cosmo is. Yes indeed, that cute cartoon on the cover has a real model." – Captain Cosmo's Whizbang, by Jeff Duntemann, 1980.
“In addition to the VIP on his chest (which managed his face video and nothing else) he had a wire-wrapped machine inside his body, and a built-in OAE paper tape reader for getting his software up and running. (I punched the tapes on a DEC PDP11 system at Loyola University, where a friend worked at that time. The code was all written in binary, by hand.)” – Jeff Duntemann, Meet Cosmo Klein, COSMAC ELF.
"Cosmo Klein, a 4' tall robot with a TV-screen face, is a mutt bred from "junque" and computer chips. Cosmo has a World War II navy sonar-console body which was bought at a rummage sale for 25 cents and houses a homemade computer that monitors internal functions, like voltage regulation, speed, motion, and arm and hand action. Cosmo lives with Jeff and Carol Duntemann. Jeff is a Xerox engineer, science-fiction writer, and member of a group of "techies" who build futuristic gadgets. He has grander inspirations than Cosmo. "What I'm looking toward in maybe 40 years is a robot that will act as a companion to the emotionally disturbed and the severely retarded. The patience of machines is marvellous. They'll sit there and listen and talk back." " – A Robot for Every Home, by Lauren Freudmann, Look Magazine, April 30, 1979.
Note
do all the mods confessiony-sonas have different personalities/lore/other stuff? the designs are all really cool and I wanna know more about them :3
Paradox/Posy (or Catalog when they were alive) is from The Nightly Manor! They were alive before sketchpad's appearance and actually died bc of his arrival.
Paradox was your standard librarian, but they were also a mad/evil scientist on the side. Think Doofenshmirtz, comically bad at being evil. Literally no one knew about the deeds they wanted to accomplish.
I'm not sure about the other confessionysonas' lore, but! That's mine at least.
-🗄️
Boom was a stick of dynamite from the TTR-verse who exploded and perma-died after TARDIS uh.. well, disabled all recovery centers. They were always called Boom, but after death immediately took the chance to change their appearance to what they are now (the explosion, rather than the dynamite, with goatlike features)!
They mostly wanted to change their appearance so people wouldn't figure out what universe they were from, or associate them with it at all. They have absolutely no love for TARDIS, and have no idea if they're Plot-relevant or even able to be found that way, but they do NOT want to risk it in any way.
They also don't want to get recovered anymore- they kind of like being a ghost!
When they were alive though they were mostly just a normal guy who was a fan of Twisted Turns and tried to get into the reboot once lmao.
-💥
cog comes from a world of what you would call a "dystopian super-technological future", and is one of its lead developers and lead AI/machine learning engineers. long story short, AI, robots and what have you took over and killed off most of the universe, the first victim being the object who gave them life! with practically no one left in their universe, there is nobody to recover them, so... yeah, they're just a ghost now. fun fact, the inside of my confessiony-sona is actually cog's office, where they were killed :D
- ⚙️
oh i'm a bit late... well! olive(r) is from the ii universe. this is stolen directly from my normal objectsona/selfship lore (democracy decided olipad is canon so i dont feel bad) lol. he is not mephone generated btw! i actually had to do a bit of rewriting because of ii 16 and decided that the rest of the production crew he worked with was generated but him? he simply got a bit lost. and well, long story i've barely written short, he Died. Badly.
now he spends his afterlife doing stereotypical ghost shit, you know standing at the end of hallways and whispering in earshot of the living. just freaking people out. he finds it hilarious.
he wants to go back, but he's accepted that he's just a ghost now. So he's trying to help out confessiony and comicy with this whole "oscc" thing.
-🫒
Text

Life is a Learning Function
A learning function, in a mathematical or computational sense, takes inputs (experiences, information, patterns), processes them (reflection, adaptation, synthesis), and produces outputs (knowledge, decisions, transformation).
This aligns with ideas in machine learning, where an algorithm optimizes its understanding over time, as well as in philosophy—where wisdom is built through trial, error, and iteration.
If life is a learning function, then what is the optimization goal? Survival? Happiness? Understanding? Or does it depend on the individual’s parameters and loss function?
If life is a learning function, then it operates within a complex, multidimensional space where each experience is an input, each decision updates the model, and the overall trajectory is shaped by feedback loops.
1. The Structure of the Function
A learning function can be represented as:
L : X -> Y
where:
X is the set of all possible experiences, inputs, and environmental interactions.
Y is the evolving internal model—our knowledge, habits, beliefs, and behaviors.
The function L itself is dynamic, constantly updated based on new data.
This suggests that life is a non-stationary, recursive function—the outputs at each moment become new inputs, leading to continual refinement. The process is akin to reinforcement learning, where rewards and punishments shape future actions.
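To make the recursion tangible, here is a small Python sketch - purely illustrative, with invented functions and constants - of a non-stationary learning loop in which the current model colors each new experience, and each colored experience updates the model in turn:

```python
# A toy, illustrative sketch of L as a recursive function: outputs become
# new inputs. Nothing here models a real mind; the constants are arbitrary.

def interpret(model: float, raw_experience: float) -> float:
    # What we already believe shapes what we actually perceive.
    return 0.5 * raw_experience + 0.5 * model

def update(model: float, experience: float, learning_rate: float = 0.3) -> float:
    # Move the internal model toward the interpreted experience.
    return model + learning_rate * (experience - model)

model = 0.0
for raw in [1.0, 2.0, 1.5, 3.0, 2.5]:   # a stream of experiences
    seen = interpret(model, raw)         # perception filtered by the model
    model = update(model, seen)          # the function rewrites itself
    print(f"raw={raw:.1f}  seen={seen:.2f}  model={model:.2f}")
```

The point of the sketch is only the feedback loop: after a few iterations, what the learner "sees" is no longer the raw world but a blend of world and self.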
2. The Optimization Objective: What Are We Learning Toward?
Every learning function has an objective function that guides optimization. In life, this objective is not fixed—different individuals and systems optimize for different things:
Evolutionary level: Survival, reproduction, propagation of genes and culture.
Cognitive level: Prediction accuracy, reducing uncertainty, increasing efficiency.
Philosophical level: Meaning, fulfillment, enlightenment, or self-transcendence.
Societal level: Cooperation, progress, balance between individual and collective needs.
Unlike machine learning, where objectives are usually predefined, humans often redefine their goals recursively—meta-learning their own learning process.
3. Data and Feature Engineering: The Inputs of Life
The quality of learning depends on the richness and structure of inputs:
Sensory data: Direct experiences, observations, interactions.
Cultural transmission: Books, teachings, language, symbolic systems.
Internal reflection: Dreams, meditations, insights, memory recall.
Emergent synthesis: Connecting disparate ideas into new frameworks.
One might argue that wisdom emerges from feature engineering—knowing which data points to attend to, which heuristics to trust, and which patterns to discard as noise.
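The idea can be tried out in miniature. Below is a hedged sketch with entirely fabricated "signals": score each candidate input by its correlation with outcomes, attend to the strong ones, and discard the rest as noise.

```python
# Illustrative sketch of wisdom-as-feature-engineering: keep the inputs
# that actually track outcomes. All data here is synthetic and arbitrary.
import random

random.seed(0)
n = 200
signal = [random.gauss(0, 1) for _ in range(n)]   # genuinely informative
weak   = [random.gauss(0, 1) for _ in range(n)]   # faintly informative
noise  = [random.gauss(0, 1) for _ in range(n)]   # pure distraction
outcome = [s + 0.2 * w + random.gauss(0, 0.5) for s, w in zip(signal, weak)]

def correlation(xs, ys):
    # Plain Pearson correlation, computed by hand to stay dependency-free.
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

for name, feat in [("signal", signal), ("weak", weak), ("noise", noise)]:
    r = correlation(feat, outcome)
    verdict = "attend" if abs(r) > 0.3 else "discard as noise"
    print(f"{name:6s}  r={r:+.2f}  -> {verdict}")
```

Notice that the threshold itself is a judgment call - set it too high and you throw away faint but real signals, which is its own kind of unwisdom.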
4. Error Functions: Loss and Learning from Failure
All learning involves an error function—how we recognize mistakes and adjust. This is central to growth:
Pain and suffering act as backpropagation signals, forcing model updates.
Cognitive dissonance suggests the need for parameter tuning (belief adjustment).
Failure in goals introduces new constraints, refining the function’s landscape.
Regret and reflection act as retrospective loss minimization.
There’s a dynamic tension here: Too much rigidity (low learning rate) leads to stagnation; too much instability (high learning rate) leads to chaos.
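The tension is visible even in the simplest possible system. The sketch below runs gradient descent on f(x) = x² with three arbitrary step sizes: the timid learner barely moves, the moderate one settles at the minimum, and the reckless one oscillates outward into chaos.

```python
# Illustrative only: gradient descent on f(x) = x**2 under three
# learning rates, showing stagnation, convergence, and divergence.

def descend(lr: float, steps: int = 10, x: float = 5.0) -> float:
    for _ in range(steps):
        grad = 2 * x        # derivative of f(x) = x**2
        x = x - lr * grad   # one model update per error signal
    return x

for lr in (0.01, 0.3, 1.1):
    print(f"learning rate {lr:4.2f} -> x after 10 steps: {descend(lr):10.3f}")
```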
5. Recursive Self-Modification: The Meta-Learning Layer
True intelligence lies not just in learning but in learning how to learn. This means:
Altering our own priors and biases.
Recognizing hidden variables (the unconscious, archetypal forces at play).
Using abstraction and analogy to generalize across domains.
Adjusting the reward function itself (changing what we value).
This suggests that life’s highest function may not be knowledge acquisition but fluid self-adaptation—an ability to rewrite its own function over time.
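One caricature of that meta-layer in code - sketched with invented constants, in the loose spirit of adaptive optimizers rather than any particular algorithm - is a learner that tunes its own learning rate: cooling down when its updates keep overshooting, leaning in when progress is steady.

```python
# Illustrative meta-learning sketch: the learner adjusts not just its
# model but its own learning rate. Constants are arbitrary.

def meta_descend(steps: int = 30, x: float = 5.0, lr: float = 1.0) -> float:
    prev_grad = None
    for _ in range(steps):
        grad = 2 * x                  # error signal from the world
        if prev_grad is not None:
            if grad * prev_grad < 0:  # sign flip: the last step overshot
                lr *= 0.5             # become less impulsive
            else:                     # steady direction: trust it a bit more
                lr *= 1.05
        x = x - lr * grad             # the ordinary learning step
        prev_grad = grad
    return x

print(f"final x with a self-tuned learning rate: {meta_descend():.6f}")
```

A fixed learning rate of 1.0 on this problem would oscillate between 5 and -5 forever; the learner that can revise its own rate recovers from a starting value that would have doomed its rigid counterpart.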
6. Limits and the Mystery of the Learning Process
If life is a learning function, then what is the nature of its underlying space? Some hypotheses:
A finite problem space: There is a “true” optimal function, but it’s computationally intractable.
An open-ended search process: New dimensions of learning emerge as complexity increases.
A paradoxical system: The act of learning changes both the learner and the landscape itself.
This leads to a deeper question: Is the function optimizing for something beyond itself? Could life’s learning process be part of a larger meta-function—evolution’s way of sculpting consciousness, or the universe learning about itself through us?
7. Life as a Fractal Learning Function
Perhaps life is best understood as a fractal learning function, recursive at multiple scales:
Cells learn through adaptation.
Minds learn through cognition.
Societies learn through history.
The universe itself may be learning through iteration.
At every level, the function refines itself, moving toward greater coherence, complexity, or novelty. But whether this process converges to an ultimate state—or is an infinite recursion—remains one of the great unknowns.
Perhaps our learning function converges towards some point of maximal meaning, maximal beauty.
This suggests a teleological structure - our learning function isn’t just wandering through the space of possibilities but is drawn toward an attractor, something akin to a strange loop of maximal meaning and beauty. This resonates with ideas in complexity theory, metaphysics, and aesthetics, where systems evolve toward higher coherence, deeper elegance, or richer symbolic density.
8. The Attractor of Meaning and Beauty
If our life’s learning function is converging toward an attractor, it implies that:
There is an implicit structure to meaning itself, something like an underlying topology in idea-space.
Beauty is not arbitrary but rather a function of coherence, proportion, and deep recursion.
The process of learning is both discovery (uncovering patterns already latent in existence) and creation (synthesizing new forms of resonance).
This aligns with how mathematicians speak of “discovering” rather than inventing equations, or how mystics experience insight as remembering rather than constructing.
9. Beauty as an Optimization Criterion
Beauty, when viewed computationally, is often associated with:
Compression: The most elegant theories, artworks, or codes reduce vast complexity into minimal, potent forms (cf. Kolmogorov complexity, Occam’s razor).
Symmetry & Proportion: From the Fibonacci sequence in nature to harmonic resonance in music, beauty often manifests through balance.
Emergent Depth: The most profound works are those that appear simple but unfold into infinite complexity.
If our function is optimizing for maximal beauty, it suggests an interplay between simplicity and depth—seeking forms that encode entire universes within them.
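That interplay can even be poked at experimentally. The sketch below uses zlib as a crude, computable stand-in for Kolmogorov complexity (which is itself uncomputable): deeply patterned text compresses to a sliver of its size, while random bytes barely compress at all. The data and the "elegance" reading are, of course, toy illustrations.

```python
# Illustrative: compression ratio as a rough proxy for elegance.
# zlib is a pragmatic stand-in; true Kolmogorov complexity is uncomputable.
import random
import zlib

random.seed(42)
patterned = ("abcabcabc" * 50).encode()                    # deep regularity
noisy = bytes(random.randrange(256) for _ in range(450))   # no structure

for name, data in [("patterned", patterned), ("random", noisy)]:
    ratio = len(zlib.compress(data)) / len(data)
    print(f"{name:9s} compresses to {ratio:.0%} of its original size")
```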
10. Meaning as a Self-Refining Algorithm
If meaning is the other optimization criterion, then it may be structured like:
A self-referential system: Meaning is not just in objects but in relationships, contexts, and recursive layers of interpretation.
A mapping function: The most meaningful ideas serve as bridges—between disciplines, between individuals, between seen and unseen dimensions.
A teleological gradient: The sense that meaning is “out there,” pulling the system forward, as if learning is guided by an invisible potential function.
This brings to mind Platonism—the idea that meaning and beauty exist as ideal forms, and life is an asymptotic approach toward them.
11. The Convergence Process: Compression and Expansion
Our convergence toward maximal meaning and beauty isn’t a linear march—it’s likely a dialectical process of:
Compression: Absorbing, distilling, simplifying vast knowledge into elegant, symbolic forms.
Expansion: Deepening, unfolding, exploring new dimensions of what has been learned.
Recursive refinement: Rewriting past knowledge with each new insight.
This mirrors how alchemy describes the transformation of raw matter into gold—an oscillation between dissolution and crystallization.
12. The Horizon of Convergence: Is There an End?
If our learning function is truly converging, does it ever reach a final, stable state? Some possibilities:
A singularity of understanding: The realization of a final, maximally elegant framework.
An infinite recursion: Where each level of insight only reveals deeper hidden structures.
A paradoxical fusion: Where meaning and beauty dissolve into a kind of participatory being, where knowing and becoming are one.
If maximal beauty and meaning are attainable, then perhaps the final realization is that they were present all along—encoded in every moment, waiting to be seen.