#input_text
Text
The Large Language Models Tested with Python
Large language models (LLMs) are a type of artificial intelligence system trained on vast amounts of text data. They are designed to understand and generate human-like language by predicting which words or phrases are likely to come next in a sentence or document. These models use complex training algorithms and neural network architectures to learn from the data and improve their performance over time. Well-known examples include GPT-3 from OpenAI and BERT from Google.
To work with a large language model in Python, you can use libraries such as Hugging Face's Transformers, which provides a simple and efficient API for using and fine-tuning LLMs. Below is a sample code snippet using the Transformers library to load a pre-trained GPT-2 model and generate text:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Load pre-trained model and tokenizer
model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# Encode input text
input_text = "Your input text here"
input_ids = tokenizer.encode(input_text, return_tensors="pt")
# Generate output text
output = model.generate(
    input_ids,
    max_length=100,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
    do_sample=True,  # sampling must be enabled for top_k/top_p/temperature to take effect
    top_k=50,
    top_p=0.95,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; reuse EOS to silence the warning
)
# Decode and print the output
output_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(output_text)
```
This code snippet uses the GPT-2 model and tokenizer from Hugging Face's Transformers library to generate text based on an input prompt. You can customize the generation parameters such as `max_length`, `top_k`, `top_p`, and `temperature` (which apply when `do_sample=True`) to control the diversity and quality of the generated text.
Large language models have various applications in natural language processing, including text generation, translation, summarization, and more. By fine-tuning these models on specific tasks or domains, they can be adapted to generate high-quality content for a wide range of applications.
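As a quick, hedged illustration of one such application, the `pipeline` API in the same Transformers library wraps summarization in a couple of lines (the default checkpoint is chosen by the library and downloaded on first use):
```python
from transformers import pipeline

# The pipeline API bundles model loading, tokenization, and decoding.
summarizer = pipeline("summarization")

article = (
    "Large language models are a type of artificial intelligence system "
    "trained on vast amounts of text data. They are designed to understand "
    "and generate human-like language, and they can be fine-tuned for "
    "tasks such as translation and summarization."
)

# min_length and max_length bound the summary size in tokens.
print(summarizer(article, max_length=30, min_length=10)[0]["summary_text"])
```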
RDIDINI
PROMPT ENGINEER
0 notes
Text
Absolutely love all the customization points! They're very well thought out, especially the reasons, hobbies, jobs, and how they affect the time you spend with the different characters.
Under the cut I'll add some code for specific ideas and go into further detail of how I personally made them and general rambling ideas, I hope it helps.
I obv don't know how far into coding you are, but I assume you already have all the pronoun variables created and set to they/them by default (if not, using they/them as the default is easiest, since each of the they/them pronouns is written differently, unlike, for example, she/her/her/hers, which helps keep the code from getting confused). Besides just setting the default option to other pronouns, you can also give the option to make custom pronouns! It's done exactly how you're letting your mc choose their own name, for example by using the code "*input_text they" to input the subjective pronoun.
I do wanna ask how you want to handle the trans part, esp when they transitioned. You said it will happen while they're away, but are there options to let the player choose when they realized it? Like, did they realize they're trans while they were away, or have they always felt that way and only chose to act on it once away? Are they choosing to transition at all, and did the family know about it beforehand, or does the mc suddenly just turn up at home fully transitioned /lh
The amount of physical appearance customization you are giving? I'm living for it. Also having multi-toned hair?? I don't know if I have seen that before and I am on my knees for it. I do think adding an input option for the hair-dye part could be something to consider. Allowing hair texture and hairstyle to be two different options gives so much freedom for the specific image people have of their mc.
Also, if there is a big difference in appearance between the high school and dropout mc, would you consider having flavor text for that? Not asking to push you into it, I'm just thinking about how much extra flavor text that could end up being and whether it might burn you out a little bit?
The amount of customization is huge, and I love it. I'm already daydreaming about the different ways you could let us customize during the story. I don't want to scare you away from the amount of customization you want to give, but I can tell you it's a lot. If you need any help with coding or errors or anything, I would love to help you (aka pls be my friend), whether it's through sharing code or hunting down errors, I would do anything for you.
Dropout Customization
Due to some questions about MC customization, I have decided to compile all the physical and personality aspects that are selectable about the Dropout.
A reminder that this is all subject to change and that new things may be added (or deleted). Feedback and ideas to further develop MC are encouraged.

Main Ideas
Name Surname
Sex, Gender, Pronouns, Breast/Pecs, Penis/Vagina (If MC is transgender, their transition takes place while they're away)
Birthday which establishes the Dropout's age as either 21 or 22 depending on the season (Spring, Summer, Autumn, Winter)
Major (Engineering, Biology, Chemistry, Computer science, Law, Economics, Education (in relation to science/maths/etc), Mathematics, Physics, Psychology). The Dropout's Major affects flavor text. These options are the ones approved by the Dropout's parents, though it's possible for MC to express interest in other degrees/topics.
Reason behind dropping out (MC got kicked out (they were caught cheating) / MC didn't get a high enough GPA and dropped out / MC never even wanted to go to college and ultimately decided they wanted out / MC didn't fit in (they were discriminated against, lonely, etc.) though they really liked college / MC originally liked their degree and college but gradually lost interest in the entire thing / MC never liked their degree and decided to drop out / Something specifically related to mental health (mainly anxiety) / Impostor syndrome.) This affects flavor text.
2 Coping Mechanisms (Alcohol, Tobacco, Drugs, Sleeping around, Avoidance, Overspending, *Hobby (overworking self) [Anger, Fake/forced happiness, Sadness, Indifference].) Each coping mechanism opens a variable and a storyline. You can choose two, though choosing one related to emotional responses [between brackets] automatically blocks out the others.
2 Hobbies (Singing, playing an instrument, songwriting, creative writing, drawing, sketching, sculpting, acting, photography, soccer, football, swimming, basketball, gymnastics, boxing, judo, karate, kickboxing, going to the gym, cooking/baking, dancing.) This affects flavor text and scenes.
Job (Bartender [Wanda, Statler is also around often], Cashier [Statler], Columnist [J (+Kai if poly)], Caregiver [Kai], Waiter/Waitress [Uma (+Travis if poly)], Tutor [Travis]) Each job gives you more time with a certain RO, as well as unlocking a storyline.
Personality Stats
Playful/Serious, Honest/Dishonest, Friendly/Rude, Introverted/Extroverted, Laid-back/Uptight, Cynical/Idealistic, Flirty/Reserved, Family-oriented/Individualistic
Others: Insomnia
Physical Appearance
*It's possible to choose MC's appearance as a high schooler as well.
Height (very tall, tall, average, short, very short)
Skin tone (ebony, dark brown, light brown, russet, golden, olive, honey, tawny, tanned, fair, rosy, ivory.) Choosing any skin tone gives you the possibility of choosing to be a poc (idea I stole from Mila, @beyondthegame)
*Build (scrawny, skinny, lithe, lean, muscular, chubby, curvy, hourglass).
*Hair color (max 3 tones: 1 base and 2 others; it's possible to return home with a mess of dye for Maude to fix). NATURAL: Ashen blonde, Sunflower blonde, Strawberry blonde, Caramel, Honey brown, Chocolate brown, Copper, Auburn, Ruby red, Midnight brown, Jet black, Ebony black. NON-NATURAL: Pink, Violet, Lilac, Blue jade, Vermilion red, Snowy white, Silver, Emerald green, Canary yellow, Bleached.
Hair texture (kinky, very coiled, coiled, curly, wavy, slightly wavy, straight)
*Hair length (ear-length, chin-length, shoulder-length, below shoulder-length, chest-length, waist-length)
*Hair style (SHORT/MEDIUM: natural, side-parted, mullet, layered, bob, ponytail, twin ponytails, buzz fade, slick back, messy, wolf cut, bun. LONG: natural, high/low ponytail, messy, shaggy, California waves, a half updo, side-swept, bun, braid, twin braids, twin ponytails).
*Eye color (albino red, dark blue, light blue, dark green, light green, hazel, amber, chestnut brown, chocolate brown, black, grey).
Others
*It's possible to choose MC's appearance as a high schooler as well.
*Glasses (yes, no, contacts)
*Facial hair (no/shaved, stubble, full beard, goatee, ducktail, Van Dyke, garibaldi, mustache, soul patch, light beard).
Scars, can choose as many as you want (back, chest, abdomen, upper and lower arm, thigh, knees, calf, mouth area, neck, cheek, hands, eye area, shoulder)
*Tattoos (one big tattoo in X body area, patch-like bodysuit, full bodysuit, one/two sleeves, just legs, a few tattoos all over, a small one in X place).
*Piercings (Ears [helix, lobe, industrial], navel, tongue, nose ring and septum, eyebrow, lips, smiley, nipples, genital)
Dimples
Braces
Freckles (face, body, both)
*Outfit/Style (streetwear, alternative, cute, preppy, casual, formal, business casual, dark academia, messy, boho/eclectic)
*Bedroom, at family home and at new apartment (messy, colorful, emo, basic, boho, modern, industrial, vintage, minimalist, cute)
*Diet (vegan, vegetarian, pescetarian, keto, meat-eater)
Family pet (small/large dog, cat, fish tank, hamster/rabbit/guinea pig, cockatiel/parrot/canaries)
Characters
Closeness to all family members (tight-knit, close, so-so, cold, barely any relationship)
Same with the friend group
Crush on Statler during high school (yes/no)
'Popularity' during high school and college
#dropout if#dropout#cog wip#cog#choice script#mc#gameplay#interactive fiction wip#hosted games#interactive fiction#code#customization#i wanna help#pls be my friend#jk unless...#no fr im so excited
199 notes
Video
Input Text With Border Bottom | HTML CSS | Sekhon Design & Code
0 notes
Text
Sieve of Eratosthenes
I stumbled across a way of calculating prime numbers earlier, called the sieve of Eratosthenes. idk who that is, but I’m guessing it’s some Greek dude.
The idea is kinda simple: you have a list of numbers that you wanna sieve (y’know, like flour). Let’s say we have the numbers 2 to 10. You start at 2 and calculate all the multiples of 2 (4, 6, 8, 10). You do the same for 3 (6, 9). And 4 (8).
This is the sieve part. Throw away all the numbers between 2 and 10 that came up as multiples. So, throw away 4, 6, 8, 9, and 10. The rest are prime numbers (2, 3, 5, 7).
Pretty neat.
I figured I’d try my hand at the new programming language I’ve been learning, Rust:
```rust
use std::io;
use std::io::Write;

#[macro_use]
extern crate indoc;

fn print_prompt() {
    let prompt = indoc!(
        "
        Sieve of Eratosthenes
        *********************
        Enter limit: "
    );
    print!("{}", prompt);
    io::stdout().flush().expect("couldn't flush stdout");
}

fn get_limit() -> usize {
    let mut input_text = String::new();
    io::stdin()
        .read_line(&mut input_text)
        .expect("couldn't read input");
    input_text.trim().parse().expect("invalid limit")
}

// Collect every multiple of n up to and including the limit (2n, 3n, ...).
fn multiples_of(n: usize, limit: usize) -> Vec<usize> {
    let mut multiples: Vec<usize> = Vec::new();
    let mut k = n + n;
    while k <= limit {
        multiples.push(k);
        k += n;
    }
    multiples
}

// The candidates to sieve: every number from 2 through the limit.
// (The range is inclusive so the limit itself gets checked, matching
// the "between 2 and {limit}" output below.)
fn create_vector(limit: usize) -> Vec<usize> {
    (2..=limit).collect()
}

fn main() {
    print_prompt();
    let limit = get_limit();
    let candidates = create_vector(limit);

    // Mark every multiple of every candidate as non-prime.
    let mut invalid_candidates: Vec<usize> = Vec::new();
    for i in &candidates {
        for k in multiples_of(*i, limit) {
            invalid_candidates.push(k);
        }
    }

    // Whatever never came up as a multiple is prime.
    let prime_numbers: Vec<usize> = candidates
        .iter()
        .filter(|i| !invalid_candidates.contains(i))
        .cloned()
        .collect();

    println!(
        "Prime numbers between 2 and {}:\n{:?}",
        limit, prime_numbers
    );
}
```
I quite like it. There are some weird concepts in the language, and to be honest this took me a little over 2 hours to complete, when it would normally take me maybe 15 minutes in a language I’m more comfortable with.
Still. It’s interesting. It was able to calculate the primes under 20 at lightning speed:
```
Sieve of Eratosthenes
*********************
Enter limit: 20
Prime numbers between 2 and 20:
[2, 3, 5, 7, 11, 13, 17, 19]
```
Cool.
2 notes
Text
Favorite tweets
For developers that make apps using home-assistant-js-websocket: We just released 7.0.3 which fixes a bug in the new subscribe entities API that would skip a state update if it was an empty string. Can cause issues for input_text values that are cleared. https://t.co/WX5SP5ZSzW
— Home Assistant Devs (@hass_devs) Apr 19, 2022
from http://twitter.com/hass_devs via IFTTT
0 notes
Text
CLARIFY BERT
BERT, which is short for Bidirectional Encoder Representations from Transformers, is a machine learning (ML) framework for natural language processing. In 2018, Google developed this algorithm to improve contextual understanding of unlabeled text across a broad range of tasks by learning to predict text that might come before and after (bi-directional) other text.
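To make that bidirectional prediction concrete, here is a minimal sketch (assuming the standard `bert-base-uncased` checkpoint) using the Transformers fill-mask pipeline, which exercises exactly that masked-word objective:
```python
from transformers import pipeline

# BERT's pre-training objective in action: predict a hidden word from
# the context on both sides of it.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

for candidate in fill_mask("The capital of France is [MASK]."):
    # Each candidate is a dict holding a predicted token and its score.
    print(candidate["token_str"], round(candidate["score"], 3))
```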
BERT is used for a wide variety of language tasks. Below are examples of what the framework can help you do:
Determine if a movie’s reviews are positive or negative
Help chatbots answer questions
Help predict text when writing an email
Quickly summarize long legal contracts
Differentiate words that have multiple meanings based on the surrounding text
BERT converts words into numbers. This step matters because machine learning models take numbers, not words, as inputs, and it is what lets you train models on textual data. In other words, BERT transforms your text into numeric representations that can then be combined with other types of data when making predictions in a machine learning model.
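As a minimal sketch of that word-to-number step (using the same `bert-base-uncased` tokenizer as the classification snippet below):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

text = "Your input text here"
tokens = tokenizer.tokenize(text)              # word pieces
ids = tokenizer.convert_tokens_to_ids(tokens)  # the integers the model actually sees
print(list(zip(tokens, ids)))
```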
To use BERT in Python, you can leverage the Hugging Face's Transformers library, which provides an efficient API for utilizing and fine-tuning BERT models. The following sample code demonstrates how to use BERT for text classification:
```python
from transformers import BertTokenizer, BertForSequenceClassification
import torch
# Load pre-trained model and tokenizer.
# Note: the classification head on top of 'bert-base-uncased' is randomly
# initialized; fine-tune it (or load a fine-tuned checkpoint) before
# trusting the predicted labels.
model_name = 'bert-base-uncased'
tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertForSequenceClassification.from_pretrained(model_name)
model.eval()
# Encode input text
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt", padding=True, truncation=True)
# Perform classification (no gradients needed for inference)
with torch.no_grad():
    outputs = model(**inputs)
predictions = torch.argmax(outputs.logits, dim=1)
# Get the predicted label
predicted_label = predictions[0].item()
print(predicted_label)
```
This code snippet uses the BERT model and tokenizer from Hugging Face's Transformers library to perform text classification on an input text. Note that the label is only meaningful after fine-tuning; the snippet showcases how easily pre-trained BERT models can be wired up for specific NLP tasks in Python.
BERT's impact on natural language understanding and its wide range of applications make it a significant advancement in the field of machine learning and natural language processing.
RDIDINI
PROMPT ENGINEER
0 notes
Text
LARGE LANGUAGE MODELS REVOLUTIONIZING INDUSTRIES IN 2024.
Large Language Models (LLMs) are revolutionizing various industries in 2024. These models, such as GPT-4 and ChatGPT, are based on Transformer-based neural networks and have billions of parameters, allowing them to understand and process human language with unprecedented accuracy and complexity.
They are being used in diverse fields, including customer service chatbots, conversational AI systems, healthcare, finance, education, and entertainment. LLMs are also opening new possibilities for businesses, such as improving customer engagement and support services.
One practical use of LLMs is in the development of open-source models like LLaMA 2 or similar, which range from 7 to 70 billion parameters and are designed for research and commercial use.
These open-source LLMs offer advantages such as heightened data security, cost-effectiveness, privacy, and community collaboration. Additionally, the future of AI and LLMs holds the potential emergence of small language models (SLMs), which can be tailored to specific domains or tasks, leading to improved performance and reduced training time.
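As a hedged sketch of what loading such an open-weight model looks like (this assumes you have requested access to the gated `meta-llama/Llama-2-7b-hf` repository and authenticated with the Hugging Face CLI; any open causal LM id follows the same pattern):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# The loading pattern is the same for any open-weight causal LM;
# the 7B variant is the smallest of the Llama 2 family.
model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("Open-source LLMs offer", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```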
```python
# Sample Python code for text generation with a pre-trained LLM.
# Note: GPT-4 is only available through OpenAI's API, not as open weights,
# so there is no 'gpt-4' checkpoint to load here; this sketch uses the
# openly available GPT-2 instead.
from transformers import GPT2LMHeadModel, GPT2Tokenizer
# Load pre-trained model and tokenizer
model = GPT2LMHeadModel.from_pretrained('gpt2')
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
# Encode input text
input_text = "The practical use of large language models in 2024 is revolutionizing"
input_ids = tokenizer.encode(input_text, return_tensors='pt')
# Generate output based on the input (sampling enabled so top_k applies)
output = model.generate(
    input_ids,
    max_length=100,
    num_return_sequences=1,
    no_repeat_ngram_size=2,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode and print the generated text
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
In short, large language models are transforming industries and providing innovative solutions in 2024, and their practical applications are diverse and far-reaching.
RDIDINI PROMPT ENGINEER
0 notes
Text
Expanding Horizons: AI's Diverse Applications in Daily Tasks.
Generative AI typically provides solutions for writing, reading, and chatting, yet its scope extends far beyond that. Continuously progressing in areas like images, predictions, voice and image recognition, data analysis, graphs, statistics, maps, and more, it addresses a wide range of tasks that once required human effort. This evolution positions AI as a dynamic tool for enhancing everyday activities.
Here's a small snippet of Python code that displays a prompt using the `Gradio` library:
```python
import gradio as gr
def generate_prompt():
    return "Generative AI offers potential solutions for writing, reading, chatting, images, predictions, voice and image recognition, data analysis, graphs, statistics, maps, etc."

# The function takes no arguments, so inputs=None; its return value is
# rendered as a text output.
iface = gr.Interface(fn=generate_prompt, inputs=None, outputs="text")
iface.launch()
```
This code uses the `Gradio` library to create a simple web interface that displays the prompt when the code is run. The user can then interact with the prompt as needed.
Here's a small snippet of Python code to provide a RAG (Retrieval-Augmented Generation) using the `transformers` library:
```python
from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration
# Initialize the RAG tokenizer and retriever.
# index_name="exact" with use_dummy_dataset=True loads a small stand-in
# index so the example runs without downloading the full wiki_dpr index.
tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-base")
retriever = RagRetriever.from_pretrained(
    "facebook/rag-token-base", index_name="exact", use_dummy_dataset=True
)
# Provide the input text
input_text = "Generative AI offers potential solutions for writing, reading, and chatting, but it goes far beyond that..."
# Encode the input text
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
# Initialize the RAG model for generation, wired to the retriever
model = RagTokenForGeneration.from_pretrained(
    "facebook/rag-token-base", retriever=retriever
)
# Generate the RAG output
output = model.generate(input_ids)
# Decode and print the generated text
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
This code uses the `transformers` library to initialize a RAG tokenizer, retriever, and model, then generates text based on the provided input. The RAG model uses the retriever to fetch relevant passages from a knowledge source and conditions its generation on them. This snippet provides a basic example of how to use RAG for text generation in Python.
To perform fine-tuning of a pre-trained model for generative AI applications in Python, you can use the following code as a starting point:
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config, TextDataset, DataCollatorForLanguageModeling, Trainer, TrainingArguments
# Load pre-trained model and tokenizer
model_name = "gpt2" # or any other pre-trained model
model = GPT2LMHeadModel.from_pretrained(model_name)
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
# Define your custom dataset and data collator.
# ("your_custom_dataset.txt" is a placeholder: point file_path at your own
# plain-text corpus. TextDataset is deprecated in recent transformers
# releases in favor of the datasets library, but it still works here.)
dataset = TextDataset(tokenizer=tokenizer, file_path="your_custom_dataset.txt", block_size=128)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)
# Define the training arguments
training_args = TrainingArguments(
    output_dir="./fine_tuned_model",
    overwrite_output_dir=True,
    num_train_epochs=3,
    per_device_train_batch_size=8,
    save_steps=10_000,
    save_total_limit=2,
)
# Create a Trainer and start the fine-tuning
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset,
)
trainer.train()
```
This code snippet uses the Hugging Face `transformers` library to fine-tune a pre-trained GPT-2 model on a custom dataset for language generation. It involves loading the pre-trained model and tokenizer, defining the custom dataset and data collator, setting the training arguments, and then initiating the fine-tuning process.
Fine-tuning pre-trained models is a reliable technique for creating high-performing generative AI applications. It involves updating pre-trained models with new information or data to customize them to a particular use case.
RDIDINI PROMPT ENGINEER
0 notes
Video
Input Text With Border | HTML CSS | Sekhon Design & Code
0 notes
Video
Input Text With Icon Slidein | HTML CSS | Sekhon Design & Code
0 notes
Video
Input Text With Icon | HTML CSS | Sekhon Design & Code
0 notes
Video
Input Text With Label Slideout | HTML CSS | Sekhon Design & Code
0 notes
Video
Input Text With Label Move Down | HTML CSS | Sekhon Design & Code
0 notes
Video
Input Text With Fixed Label | HTML CSS | Sekhon Design & Code
0 notes