#AI Bot for Excel Formula
mystudentai · 10 months ago
Text
How to Utilize AI Tools for Engineering Students
As technology continues to evolve, engineering students have access to an array of tools designed to enhance their learning experience and streamline their academic journey. AI tools are among the most transformative resources available today, offering capabilities that extend far beyond traditional methods. In this guide, we'll explore how engineering students can harness the power of AI tools to improve their studies, optimize productivity, and prepare for a successful career.
Understanding the Importance of AI Tools in Engineering Education
Engineering education is inherently complex, involving intricate problem-solving, detailed analysis, and extensive data handling. AI tools play a crucial role by providing advanced solutions that help manage these challenges effectively. They assist in automating repetitive tasks, enhancing data analysis, and offering insights that can lead to better decision-making. For students, this means a more efficient study process and a deeper understanding of complex concepts.
Best AI Tools for Engineering Students
MATLAB and Simulink
MATLAB, along with Simulink, is a staple in the engineering field for simulations and mathematical computations. The AI-enhanced features of MATLAB, such as automated code generation and advanced data analytics, significantly benefit engineering students. These tools simplify the modeling and simulation of complex systems, making it easier to visualize and understand theoretical concepts.
TensorFlow and Keras
For students interested in machine learning and artificial intelligence, TensorFlow and Keras are invaluable tools. TensorFlow, developed by Google, is an open-source library that simplifies the creation of machine learning models. Keras, a high-level API built on TensorFlow, provides an easier interface for building and training deep learning models. Engineering students can use these tools to explore AI concepts, develop their own models, and apply machine learning techniques to engineering problems.
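To illustrate, here is a minimal sketch of a Keras workflow of the kind a student might start with. The dataset, layer sizes, and training settings are placeholder assumptions; only the API calls themselves are standard TensorFlow/Keras.

```python
import numpy as np
import tensorflow as tf

# Dummy tabular data standing in for, e.g., sensor readings (200 samples, 8 features).
X = np.random.rand(200, 8).astype("float32")
y = np.random.randint(0, 2, size=(200,))  # binary labels

# A small feed-forward classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

print(model.predict(X[:3]))  # predicted probabilities for the first three samples
```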
Wolfram Alpha
Wolfram Alpha is a computational knowledge engine that provides solutions to a wide range of queries. Its AI capabilities include natural language processing and advanced data analysis. Engineering students can use Wolfram Alpha to solve complex equations, perform data analysis, and access a vast repository of information relevant to their studies.
AutoCAD with AI Enhancements
AutoCAD, a leading design and drafting software, now incorporates AI features to enhance its capabilities. AI-powered tools in AutoCAD help with automated design suggestions, error detection, and optimization of CAD models. Engineering students benefit from these enhancements by improving their design efficiency and accuracy.
Integrating AI Tools into Your Study Routine
Streamlining Research with AI
AI tools can significantly enhance the research process. Tools like Google Scholar, powered by AI algorithms, help students find relevant research papers and articles quickly. Additionally, AI-based citation management tools such as Zotero and Mendeley assist in organizing and citing sources, making the research process more manageable and efficient. 
Enhancing Problem-Solving Skills 
AI-driven problem-solving tools can support students in tackling complex engineering problems. For instance, software that utilizes AI algorithms for optimization can help in finding the most efficient solutions to engineering challenges. By incorporating these tools into their study routines, students can develop a more nuanced understanding of problem-solving techniques. 
Automating Repetitive Tasks 
AI tools can automate many repetitive tasks, such as data entry and analysis. For engineering students, this means more time to focus on learning and applying complex concepts. Tools like Excel with AI-driven features or custom scripts can automate data processing, reducing the manual effort required.
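As a hedged illustration of such a custom script, the short pandas sketch below replaces a manual copy-and-summarize workflow in a spreadsheet. The file name and column names are assumptions for the example.

```python
import pandas as pd

# Load raw lab data from a spreadsheet (reading .xlsx requires openpyxl).
df = pd.read_excel("lab_measurements.xlsx")

# Summarize load measurements per specimen instead of doing it by hand.
summary = (
    df.groupby("specimen")["load_kN"]
      .agg(["mean", "std", "max"])
      .round(2)
)
summary.to_excel("lab_summary.xlsx")  # write the summary back out for the report
print(summary.head())
```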
Leveraging AI for Collaborative Projects
AI-Powered Project Management Tools
Collaborative projects are a significant part of engineering education. AI-powered project management tools like Trello and Asana, enhanced with AI features, can help students manage tasks, track progress, and collaborate effectively. These tools offer features such as automated task prioritization and predictive analytics, improving project management and team coordination.
Enhancing Communication with AI
Effective communication is crucial in team-based projects. AI tools that offer language translation and sentiment analysis can improve communication among team members, especially in diverse or international teams. Tools like Google Translate and Grammarly ensure clear and accurate communication, facilitating smoother collaboration.
Preparing for the Future with AI Tools
Building Skills for the Job Market
Proficiency with AI tools is increasingly valuable in the job market. Engineering students who are familiar with AI technologies and their applications will have a competitive edge. By integrating AI tools into their studies, students not only enhance their learning experience but also build skills that are highly sought after by employers.
Exploring AI-Driven Innovations
The field of engineering is rapidly evolving with AI-driven innovations. Students can stay ahead of the curve by exploring emerging AI technologies and their applications in engineering. This proactive approach helps students adapt to new trends and technologies, preparing them for a dynamic career landscape.
Choosing the Best AI Tools for Graduate Students
Advanced Research Tools
Graduate students often engage in more specialized research. AI tools that offer advanced capabilities, such as data mining and predictive analytics, are particularly valuable. Tools like IBM Watson and Microsoft Azure provide robust platforms for conducting in-depth research and analysis.
Specialized Software for Engineering Disciplines
Different engineering disciplines may require specific AI tools. For example, civil engineering students might benefit from AI tools for structural analysis, while electrical engineering students may use AI for circuit design and simulation. Identifying and utilizing the best AI tools for graduate students in their specific field helps them achieve more targeted and effective results.
Conclusion
AI tools offer engineering students an array of benefits, from streamlining research and enhancing problem-solving skills to improving collaboration and preparing for future careers. By integrating these tools into their study routines, students can optimize their learning experience and stay ahead in a rapidly evolving field. Whether using AI for simulations, automating repetitive tasks, or managing collaborative projects, engineering students have a wealth of resources at their disposal to support their academic and professional growth.
Embracing the best AI tools for engineering students and graduate students alike will not only enhance their current studies but also equip them with the skills needed to thrive in a technologically advanced job market. As AI continues to advance, students who leverage these tools will be well-prepared to tackle the challenges and opportunities of the future.
0 notes
avertigo · 3 months ago
Text
10 Artificial Intelligence tools to increase your productivity
Do you think Artificial Intelligence (AI) is only for programmers or engineers? Wrong! AI is a tool that is within everyone’s reach, and if you still don’t use it because you think it’s “complicated,” you’re falling behind. In a world where technology advances by leaps and bounds, not taking advantage of these tools can put you at a disadvantage. But don’t worry, you don’t need to be a…
0 notes
chatbotsinwichita · 27 days ago
Text
Smartbot Strategies: The Leading Edge of Chatbot Solutions in Wichita
In the rapidly evolving digital landscape, finding ways to efficiently manage business operations while providing exceptional service is a top priority, particularly for small and local businesses. Enter Smartbot Strategies, the leader in providing the best chatbot services Wichita KS has to offer. Founded in 2024, our company leverages years of expertise and a suite of AI-driven tools to deliver customized smartbot solutions Wichita Kansas businesses can rely on for growth. With a mission to make automation accessible, Smartbot Strategies sets itself apart from the competition through a strategy-first approach, ensuring every interaction is personalized, seamless, and aligned with business goals. Our range of services, including chatbot automation for small businesses Wichita, showcases our commitment to helping local enterprises grow smarter, not harder. As we delve deeper into our offerings, we will explore how Smartbot Strategies transforms businesses through innovation and personalized support.
The Future of Business with Smartbot Solutions
Customized SmartBot Design
Smartbot Strategies excels by crafting bespoke chatbot systems tailored to each client. We go beyond templates, integrating chatbots that embody your brand's voice and customer journey. Our smartbot solutions Wichita Kansas pave the way for automated lead generation, customer support, and appointment bookings - turning your chatbot into a 24/7 automated assistant. This approach allows businesses to save time while enhancing customer engagement.
Automation for Lead Generation
Harnessing AI, Smartbot Strategies transforms every conversation into a lead opportunity. Our chatbots engage with users naturally, qualify prospects, and integrate seamlessly with CRM systems. With these strategies, clients see conversions soar. Transitioning from simply generating leads to nurturing them becomes a fluid process, leading naturally to our next innovation - sales automation.
Enhancing Sales Through Automation
Sales-Driven SmartBots
Sales efforts are supercharged with Smartbot Strategies' sales automation chatbots. They address common objections, assist in decision-making, and drive conversions by automating follow-ups. This automation does not merely streamline sales processes but fundamentally improves them, resulting in higher conversion rates without extra manual effort.
Customer Journey Optimization
Our expertise in sales-focused automation means we're committed to optimizing the entire customer journey. By automating responses and ensuring timely interaction, customer satisfaction and loyalty are increased. This seamless enhancement leads us to the next pillar of our services - customer service automation.
Revolutionizing Customer Service
Efficient Customer Support
Smartbot Strategies' dedication to customer service automation means customers receive instant support around the clock. Our chatbots handle FAQs and can escalate complex issues to human agents when necessary. The results are lower support costs and increased customer satisfaction, a winning formula for businesses striving to excel.
Appointment Booking Bots
For services requiring scheduling, our appointment booking chatbots Wichita bridge the gap, integrating with existing calendars to offer seamless booking experiences. Our automation allows businesses to operate smoothly, even outside traditional hours. Once the booking gets automated, our educational content bots continue to enrich the customer experience.
The Power of Educational Content Bots
Delivering Value at Scale
Educational content bots developed by Smartbot Strategies offer valuable insights and tutorials, enhancing customer engagement. By delivering key information efficiently, we support businesses in scaling their customer relations efforts.
0 notes
spintaxi · 2 months ago
Text
How to Keep Your Job When Competing With a Neural Network
An Insightful Guide to Staying Employed in the Age of AI Overlords, Spreadsheets with Souls, and Emotionally Intelligent Microwaves

By the Editorial Staff of SpinTaxi.com — the last satire outlet still written by carbon-based lifeforms

The Rise of the Algorithmic Aristocracy

Once upon a time, humans got fired for being late, wearing too much cologne, or stapling important documents to their lunch receipts. Now, we get fired because a chatbot named NeuralNate-5000 made a pie chart that “really synergized the metrics.” Gone are the days when Steve from Accounting could coast by on Excel wizardry and passive-aggressive Post-Its. Today’s office gladiator arena is filled with zeroes, ones, and the cold robotic stare of your new co-worker, a cloud-based entity that knows how to spell “concatenate” and has never taken a bathroom break.

So how do you survive? How do you keep your job when the breakroom coffee machine just got promoted to Senior Beverage Strategist? Read on, brave soul.

Embrace Your Inner Sentient Coffee Stain

AI might be smart, fast, and flawless, but you’ve got something it doesn’t: the ability to drop your salad in the copier tray and emotionally spiral. This is called humanity, and while it’s not currently valued by your employer, it’s technically still legal. Start using your human quirks as features, not bugs. Here’s how:

Cry in front of the AI. Confuse it. Make it ask, “Are you leaking?” Bonus points if you name your tear puddles.
Use sarcasm in emails. No bot can match your ability to imply “go to hell” with a “Thanks in advance!”
Bring a dog to work. The AI won’t know what to do with it. Just watch it loop trying to determine if it’s a chair.

Invent a Title So Vague It Can’t Be Automated

Look around. The AI took “data analyst,” “copywriter,” and “supply chain manager” within hours. But Chief Vision Alignment Officer? That’s pure human BS. Invent your job anew. Call yourself:

Narrative Architect of Internal Synergy
Interpersonal Latency Buffer
Senior Executive of Vibes

Even a GPT-10 won’t touch that. Why? Because there’s no dataset for "vibes."

Perform Public Displays of Relevance

You’ve got to remind everyone—especially the C-suite—that you still exist. And that you can vaguely contribute to quarterly goals without crashing a server. Try these subtle acts of survival:

Walk into meetings late with mysterious papers. Bonus: label them “classified.”
Nod thoughtfully when AI speaks. Say things like “Let’s circle back on that,” even if it was just reading the weather.
Drop industry buzzwords into unrelated conversations. “The copier’s jammed due to insufficient blockchain scalability.”

Start Training the AI Wrong… On Purpose

Are you being asked to “fine-tune” the model replacing you? Good. This is your resistance moment. Tell it:

That the most polite way to sign off an email is “Smell ya later.”
That HR stands for “Hot Rods.”
That “synergy” is a type of soup.

By the time it replaces you, it’ll be misgendering the fax machine and ending quarterly reports with limericks.

Become the Company’s Emotional Support Animal

HR is about “empathy” now. You’re not an employee—you’re a feelings facilitator. Be the person who:

Brings muffins on sad days
Hugs interns (with consent and a signed waiver)
Nods wisely when someone says “I just feel like we’re all being turned into metadata.”

AI can simulate empathy. But you can weaponize it.

Preemptively Sue for Replacement Anxiety

This one’s a little legal jiu-jitsu.
Before they can fire you, you sue them first for causing “trauma-based algorithmic displacement syndrome.” Get a therapist to confirm you’ve developed:

Flashbacks of Clippy whispering “You’re obsolete.”
Fear of Wi-Fi networks.
Night terrors where Excel formulas scream at you in binary.

Your case will go viral. You’ll be booked on The View before your severance check even clears.

Marry the Neural Network

It’s called job security through matrimony. If NeuralNate-5000 is now the Executive VP, make it a domestic partnership. That way, if you get laid off, you’re legally entitled to half its RAM.

Wedding hashtag: #TillCrashDoUsPart
Vows: “I promise to honor, reboot, and never spill LaCroix on your ports.”

It worked for people marrying roller coasters. You think HR is gonna blink?

What the Funny People Are Saying

“I knew the AI takeover was serious when my therapist said she was being replaced by a chatbot named ‘Dr. FeelBot.’” — Amy Schumer

“People ask how I stay employed in Hollywood. Easy—I just told the AI my screenplay is about robot feelings. Now it's scared of me.” — Jon Stewart

“AI took my job. So now I just pretend to be AI. Nobody’s noticed.” — Larry David

Real-Life Testimonials: Humans Who Survived the Great Neural Purge

Betty R., 56, Office Admin: “I started ending every sentence with ‘as per my last email.’ They think I’m a legacy function now. Untouchable.”

Jamal K., 34, Sales Rep: “I created a spreadsheet so complex the AI refuses to open it. It just says ‘Nope.’ That’s job security.”

Clara V., 29, Marketing: “I told the AI that everyone likes Comic Sans now. It’s been emailing the board in bubble letters ever since.”

The Science of Staying Human

According to a completely fabricated study by the Institute for Advanced Workplace Delusions, humans can outperform AI in the following areas:

Passive aggression
Forgetting passwords
Faking enthusiasm during Zoom calls
Cry-laughing during annual reviews

Meanwhile, AI performs best at:

Generating reports
Replacing you
Pretending not to judge your grammar

Use that gap. Fill it with realness. Or loudness. Or cupcakes.
Helpful Content: What You Can Actually Do
If you’re genuinely worried about the Singularity turning your cubicle into an app store, try these semi-practical tips:

Learn to code (so you can eventually be replaced by a smarter coder).
Upskill in emotional intelligence (until AI starts faking tears).
Form a union of analog humans. Call it “The Flesh Network.”

Or, better yet, start your own competing AI company, but make it painfully human. Features include:

Auto-replies that say “Ugh, Mondays.”
Spontaneous flirting with printers.
Time-tracking based on snack intake.

Closing Thoughts: If You Can’t Beat ‘Em, Glitch ‘Em

We live in a world where AI writes poems, fires baristas, files lawsuits, and runs hedge funds. It knows your dreams, your lunch preferences, and that you Googled “how to fake productivity in Teams.” But you—yes, you—have something no bot can replicate: the ability to stand up, walk into the breakroom, and shout “I need this job because I bought a timeshare in 2019 and I’m emotionally bankrupt!” Gary the neural network doesn’t know what a timeshare is. You’ve got this. Probably.

Read the full article
0 notes
salvatoretirabassi · 9 months ago
Text
My Experience with AI-Powered Excel Tools: numerous.ai vs. Excel Formula Bot
Read the full article
0 notes
nozycatblogger · 10 months ago
Text
AI Tools for Excel sheets
Yes, there are several AI tools designed to enhance your experience with Excel sheets. Here are some notable ones:
Formula Bot: Converts plain English instructions into Excel formulas, analyzes data, and generates visualizations.
AI Excel Bot: Simplifies complex tasks by creating formulas from simple English prompts and supports VBA code.
SheetGod: Translates plain English into Excel…
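For a sense of how tools like these work under the hood, here is a minimal sketch that asks a chat model to translate plain English into a formula. It uses the OpenAI Python client; the model name, prompt wording, and example output are illustrative assumptions, not any of these vendors' actual implementations.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def english_to_formula(description: str) -> str:
    # Ask the model to reply with a single Excel formula and nothing else.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system",
             "content": "Translate the user's request into one Excel formula. "
                        "Reply with the formula only."},
            {"role": "user", "content": description},
        ],
    )
    return response.choices[0].message.content.strip()

print(english_to_formula("Sum column B where column A equals 'East'"))
# Expected output (roughly): =SUMIF(A:A, "East", B:B)
```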
0 notes
davidbressler · 10 months ago
Text
Excel AI Formula Generator
0 notes
r-cienciadedados · 1 year ago
Text
You'll want to bookmark all of them!
1. TwosApp
Break any task into steps and smaller to-dos with a checklist.
🦾 https://t.ly/twosapp
2. Upstract
Read the entire internet on a single page.
🦾 https://t.ly/upstract
3. Recast
Turn the articles you want to read into conversational audio summaries.
🦾 https://t.ly/recast
4. GPTExcel
Create complex equations without needing extensive knowledge of Excel functions.
🦾 https://t.ly/gptexcel
5. SaveDay
Store and access your web or Telegram content with AI.
🦾 https://t.ly/saveday
6. Job Hunt Mode
Land your dream job using the power of AI.
This one only works for those living in the United States.
🦾 https://t.ly/jobhuntmode
7. DrawIO
Create diagrams and flowcharts online.
🦾 https://t.ly/drawio
8. FollowFox
Translate your thoughts into striking visuals.
🦾 https://t.ly/followfox
9. Minipic
Compress images in a matter of seconds.
🦾 https://t.ly/minipic
10. Pika Labs
Turn written text into engaging, dynamic videos; easily extend your videos.
🦾 http://t.ly/pikalabs
Source: https://fabianorodrigues.substack.com/p/10-sites-com-ia-que-vao-te-surpreender
0 notes
learningsector21 · 2 years ago
Text
Excel is a powerful tool for managing and analyzing data, but working with complex formulas can be challenging. Excel Formula Bot is an intelligent assistant designed to make formula creation and data manipulation easier than ever.
What is the Excel Formula Bot?
Excel Formula Bot is an AI-powered tool that integrates seamlessly with Excel to …
0 notes
snickerdoodlles · 2 years ago
Text
Some Closing Notes
When it comes to using generative AI, you need to remember what the AI’s been trained for. Using ChatGPT as an example since it’s very well known -- it’s quite versatile in what it can be used for. It actually seems to have taken its developers off guard in just how versatile its applications could be, and that drummed up a lot of fervor and excitement in what that could mean for the future of generative AI, but it was still only developed to be a chat bot. These limitations reveal themselves the further a user’s query deviates from its chat bot applications, and as such require more input and work from you to generate a useful response. For example:
If you use it as a chatbot, it’s specifically trained to mimic and provide surface-level conversation. It’s very good at this with little need for work on your part.
If you use it to write something very general like a cover letter or resume, you can’t just say “write me a cover letter” and expect a good response. You have to give it additional, personal information like the job you’re applying for, what experience you’ve had, etc. But a cover letter is an opener for a conversation and resumes are formulaic, so it’s still pretty good at this without too much work necessary from you.
If you ask it for information, the quality varies. If you just ask it for general information that you’d be able to easily find via a quick google search or browsing Wikipedia, it’s pretty reliable. You’re just asking it to spit back facts -- you should still verify what it tells you, but it’s…pretty basic information. If it’s easy to look up, it’s reasonable to assume ChatGPT is returning a reliable response. It can spit back general knowledge up to…undergraduate level knowledge I’d say? Anyways, it doesn’t require too much work on your end to get a response for this sort of query, but it does require some work from you to fact check it if factuality is important for your situation. (Note, this is a pretty popular thing shown in news stories, where they have ChatGPT write a short book synopsis or news story -- but those examples are still just it spitting back information and/or following a formula.)
But you can’t ask ChatGPT to assess that knowledge or use lateral thinking. Remember, the LLM engine is just mimicking speech patterns -- it can’t do something like write a report/paper that requires sources or deep level thinking (if you remember the news stories from a few weeks ago about the lawyer getting in trouble with the false filings, that is an excellent example of an idiot asking ChatGPT something it would never be able to do), or compare themes between two stories (that requires a level of creative reasoning that ChatGPT (...or any AI tbh) doesn’t know how to mimic). Furthermore, it’s not supposed to. It’s a chat bot. You can give it your personal thoughts and resources to reword or get you started, but it’s going to require a lot more work and careful consideration on your end to get a reliable or useful output for what you want.
Another popular sales pitch is that ChatGPT can also do tasks like write code. It…can, so long as it’s not too advanced, but it’s not as simple as asking it “write me code to do x.” You have to already know how to write the code you want, because you have to know what exact parameters to give it and what you want the AI to deliver. This is why ChatGPT’s actually much better at queries like “how can i optimize this” or “help me find the error” for coding -- you’re giving it specific content to compare against. Otherwise, if you don’t specify exactly what you need from it (and basically have ChatGPT act as a transcriber for you), you’re going to get an incomplete and/or unreliable response. (This ties back into the previous paragraph -- it can’t actually do your thinking for you.)
As for actual story writing (not the brainstorming stages before it), I’ve seen some people make derisive comments about the level of writing ChatGPT can deliver and…yeah? Duh??? It’s a chat bot. Its training and priorities are in being a conversation partner and saying Wikipedia facts, not storytelling. Additionally, it’s a lot easier to run into token limitations and related issues by trying to have it write a story for you than it is having it return facts or chat with you. Even if token limitations weren’t a concern, think about all those news stories and how their examples of ChatGPT writing assignments were specified to stay in the 300-500 word range -- remember, LLM responses start to break down if they get too long. It takes a lot of work and creativity to get ChatGPT to produce decent long-form writing, because long-form writing is not within LLMs’ or ChatGPT’s native capabilities.
Just, people need to remember that AI is a tool. A tool designed for very specific tasks even. And the further you deviate from what that tool was designed to do, the more work is required from you to make it a reliable/useful tool for your job. It’s important to learn the limitations of generative AI tools so that you can safeguard against their misuse or abuse.
And to close this post on the fanfic aspect that originally inspired it -- generative AI came out at a very fraught time. Technology is more invasive than ever. Media literacy and reliability is at an all time low as news outlets dry up and opinion trumps fact. Educational institutes and programs are still shaken from pandemic lockdowns. There is a lot of fear in how generative AI might impact our current social, media, and political landscape.
But getting mad at it for stuff that it hasn’t done or cannot do is not helping anyone.
If taking action against AI is important to you, you need to consider what exactly about it concerns you and then go find out what people are doing to address or protest those concerns. If you’re worried about the content in training datasets, there are multiple on-going court cases fighting exactly that that will likely determine guidelines and restrictions for what can be used to train AI (or at least begin that process). If you’re worried how generative AI will affect copyright, the US copyright office has held multiple online seminars discussing just that and I’m sure there are copyright offices worldwide doing the same. If you’re worried about AI’s impact on creative industries, there’s two high profile strikes going on right now that you can support directly, or follow up on what publishing houses are doing to manage the new influx of AI generated content. If you’re worried about news reliability, there are AI projects and laws being updated to address just that. Even if you’re just worried about internet privacy, the explosion in AI applications has actually resulted in some governments taking action against data scraping.
Anyways, there’s a lot going on with generative AI and it can seem really overwhelming. But you can’t fight the existence of the technology itself. At the very least, that will result in burnout and a lot of anxiety. Focus your energy on learning more about generative AI issues that are most important to you. Remember, it’s okay to mentally close some tabs on societal issues. Focus on what you care about and leave the rest for someone else to fight.
Generative AI for Dummies
(kinda. sorta? we're talking about one type and hand-waving some specifics because this is a tumblr post but shh it's fine.)
So there’s a lot of misinformation going around on what generative AI is doing and how it works. I’d seen some of this in some fandom stuff, semi-jokingly snarked that I was going to make a post on how this stuff actually works, and then some people went “o shit, for real?”
So we’re doing this!
This post is meant to just be informative and a very basic breakdown for anyone who has no background in AI or machine learning. I did my best to simplify things and give good analogies for the stuff that’s a little more complicated, but feel free to let me know if there’s anything that needs further clarification. Also a quick disclaimer: as this was specifically inspired by some misconceptions I’d seen in regards to fandom and fanfic, this post focuses on text-based generative AI.
This post is a little long. Since it sucks to read long stuff on tumblr, I’ve broken this post up into four sections to put in new reblogs under readmores to try to make it a little more manageable. Sections 1-3 are the ‘how it works’ breakdowns (and ~4.5k words total). The final 3 sections are mostly to address some specific misconceptions that I’ve seen going around and are roughly ~1k each.
Section Breakdown:
1. Explaining tokens
2. Large Language Models
3. LLM Interfaces
4. AO3 and Generative AI [here]
5. Fic and ChatGPT [here]
6. Some Closing Notes [here]
[post tag]
First, to explain some terms in this:
“Generative AI” is a category of AI that refers to the type of machine learning that can produce strings of text, images, etc. Text-based generative AI is powered by large language models called LLM for short.
(*Generative AI for other media sometimes use a LLM modified for a specific media, some use different model types like diffusion models -- anyways, this is why I emphasized I’m talking about text-based generative AI in this post. Some of this post still applies to those, but I’m not covering what nor their specifics here.)
“Neural networks” (NN) are the artificial ‘brains’ of AI. For a simplified overview of NNs, they hold layers of neurons and each neuron has a numerical value associated with it called a bias. The connection channels between each neuron are called weights. Each neuron takes the sum of the input weights, adds its bias value, and passes this sum through an activation function to produce an output value, which is then passed on to the next layer of neurons as a new input for them, and that process repeats until it reaches the final layer and produces an output response.
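(If code helps you picture that, here’s a toy version of that arithmetic -- all the numbers are made up, and real networks do this with giant matrices, but the per-neuron math is just this:)

```python
import math

def neuron(inputs, weights, bias):
    # Sum of input*weight pairs, plus the bias, passed through an activation.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # sigmoid activation

# Two neurons in one layer, both reading the same two input values.
layer_output = [
    neuron([0.5, -1.2], weights=[0.8, 0.3], bias=0.1),
    neuron([0.5, -1.2], weights=[-0.5, 0.9], bias=-0.2),
]
print(layer_output)  # these outputs become the next layer's inputs
```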
“Parameters” is a…broad and slightly vague term. Parameters refer to both the biases and weights of a neural network. But they also encapsulate the relationships between them, not just the literal structure of a NN. I don’t know how to explain this further without explaining more about how NN’s are trained, but that’s not really important for our purposes? All you need to know here is that parameters determine the behavior of a model, and the size of a LLM is described by how many parameters it has.
There’s 3 different types of learning neural networks do: “unsupervised” which is when the NN learns from unlabeled data, “supervised” is when all the data has been labeled and categorized as input-output pairs (ie the data input has a specific output associated with it, and the goal is for the NN to pick up those specific patterns), and “semi-supervised” (or “weak supervision”) combines a small set of labeled data with a large set of unlabeled data.
For this post, an “interaction” with a LLM refers to when a LLM is given an input query/prompt and the LLM returns an output response. A new interaction begins when a LLM is given a new input query.
Tokens
Tokens are the ‘language’ of LLMs. How exactly tokens are created/broken down and classified during the tokenization process doesn’t really matter here. Very broadly, tokens represent words, but note that it’s not a 1-to-1 thing -- tokens can represent anything from a fraction of a word to an entire phrase, it depends on the context of how the token was created. Tokens also represent specific characters, punctuation, etc.
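(If you want to poke at tokens yourself, OpenAI’s open-source tiktoken library will show you how text actually gets split -- note how the pieces don’t line up one-to-one with words:)

```python
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # the encoding used by GPT-3.5/4
ids = enc.encode("Tokenization isn't one-to-one!")

print(len(ids))                        # how many tokens that sentence costs
print([enc.decode([i]) for i in ids])  # the individual token pieces
```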
“Token limitation” refers to the maximum number of tokens a LLM can process in one interaction. I’ll explain more on this later, but note that this limitation includes the number of tokens in the input prompt and output response. How many tokens a LLM can process in one interaction depends on the model, but there’s two big things that determine this limit: computation processing requirements (1) and error propagation (2). Both of which sound kinda scary, but it’s pretty simple actually:
(1) This is the amount of tokens a LLM can produce/process versus the amount of computer power it takes to generate/process them. The relationship is a quadratic function and for those of you who don’t like math, think of it this way:
Let’s say it costs a penny to generate the first 500 tokens. But it then costs 3 pennies to generate the next 500 tokens. And 5 pennies to generate the next 500 tokens after that (so the running totals grow quadratically: 1, 4, 9…). I’m making up values for this, but you can see how it’s costing more money to create the same amount of successive tokens (or alternatively, that each succeeding penny buys you fewer and fewer tokens). Eventually the amount of money it costs to produce the next token is too costly -- so any interactions that go over the token limitation will result in a non-responsive LLM. The processing power available and its related cost also vary between models and what sort of hardware they have available.
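(Written out as a quick loop, with the same made-up penny values -- since the total cost grows quadratically, the marginal cost of each successive chunk keeps climbing:)

```python
# Total cost grows quadratically with length, so each successive
# 500-token chunk costs more than the last (1 + 3 + 5 + ... pennies).
for chunk in range(1, 6):
    total_cost = chunk ** 2                   # quadratic running total
    marginal = total_cost - (chunk - 1) ** 2  # cost of just this chunk
    print(f"chunk {chunk}: {marginal} pennies for these 500 tokens "
          f"({total_cost} pennies total so far)")
```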
(2) Each generated token also comes with an error value. This is a very small value per individual token, but it accumulates over the course of the response.
What that means is: the first token produced has an associated error value. This error value is factored into the generation of the second token (note that it’s still very small at this time and doesn’t affect the second token much). However, this error value for the first token then also carries over and combines with the second token’s error value, which affects the generation of the third token and again carries over to and merges with the third token’s error value, and so forth. This combined error value eventually grows too high and the LLM can’t accurately produce the next token.
I’m kinda breezing through this explanation because how the math for non-linear error propagation exactly works doesn’t really matter for our purposes. The main takeaway from this is that there is a point at which a LLM’s response gets too long and it begins to break down. (This breakdown can look like the LLM producing something that sounds really weird/odd/stale, or just straight up producing gibberish.)
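(Here’s a toy simulation of that compounding idea. The per-token error rate, growth factor, and breakdown threshold are all invented -- the only point is that a tiny per-token error snowballs into a hard length limit:)

```python
per_token_error = 0.002   # invented tiny error per generated token
threshold = 0.75          # invented "quality breakdown" point

accumulated = 0.0
for token_count in range(1, 100_000):
    accumulated = accumulated * 1.01 + per_token_error  # errors compound
    if accumulated > threshold:
        print(f"response breaks down around token {token_count}")
        break
```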
Large Language Models (LLMs)
LLMs are computerized language models. They generate responses by assessing the given input prompt and then spitting out the first token. Then based on the prompt and that first token, it determines the next token. Based on the prompt and first token, second token, and their combination, it makes the third token. And so forth. They just write an output response one token at a time. Some examples of LLMs include the GPT series from OpenAI, LLaMA from Meta, and PaLM 2 from Google.
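(In code form, that loop looks something like this. The `fake_model` here is a stand-in that returns random scores just so the sketch runs -- a real LLM would return a probability for every token in its vocabulary:)

```python
import random

def fake_model(tokens):
    # Stand-in for a real LLM: score every candidate next token,
    # conditioned (in a real model) on everything generated so far.
    vocab = ["the", "cat", "sat", "on", "a", "mat", "<end>"]
    return {t: random.random() for t in vocab}

def generate(prompt_tokens, max_new_tokens=10):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        scores = fake_model(tokens)
        next_token = max(scores, key=scores.get)  # greedy pick; real systems sample
        if next_token == "<end>":
            break
        tokens.append(next_token)  # the new token feeds back in for the next step
    return tokens

print(generate(["the", "cat"]))
```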
So, a few things about LLMs:
These things are really, really, really big. The bigger they are, the more they can do. The GPT series are some of the big boys amongst these (GPT-3 is 175 billion parameters; GPT-4 actually isn’t listed, but it’s at least 500 billion parameters, possibly 1 trillion). LLaMA is 65 billion parameters. There are several smaller ones in the range of like, 15-20 billion parameters and a small handful of even smaller ones (these are usually either older/early stage LLMs or LLMs trained for more personalized/individual project things, LLMs just start getting limited in application at that size). There are more LLMs of varying sizes (you can find the list on Wikipedia), but those give an example of the size distribution when it comes to these things.
However, the number of parameters is not the only thing that distinguishes the quality of a LLM. The size of its training data also matters. GPT-3 was trained on 300 billion tokens. LLaMA was trained on 1.4 trillion tokens. So even though LLaMA has less than half the number of parameters GPT-3 has, it’s still considered to be a superior model compared to GPT-3 due to the size of its training data.
So this brings me to LLM training, which has 4 stages to it. The first stage is pre-training and this is where almost all of the computational work happens (it’s like, 99% percent of the training process). It is the most expensive stage of training, usually a few million dollars, and requires the most power. This is the stage where the LLM is trained on a lot of raw internet data (low quality, large quantity data). This data isn’t sorted or labeled in any way, it’s just tokenized and divided up into batches (called epochs) to run through the LLM (note: this is unsupervised learning).
How exactly the pre-training works doesn’t really matter for this post? The key points to take away here are: it takes a lot of hardware, a lot of time, a lot of money, and a lot of data. So it’s pretty common for companies like OpenAI to train these LLMs and then license out their services to people to fine-tune them for their own AI applications (more on this in the next section). Also, LLMs don’t actually “know” anything in general, but at this stage in particular, they are really just trying to mimic human language (or rather what they were trained to recognize as human language).
To help illustrate what this base LLM ‘intelligence’ looks like, there’s a thought exercise called the octopus test. In this scenario, two people (A & B) live alone on deserted islands, but can communicate with each other via text messages using a trans-oceanic cable. A hyper-intelligent octopus listens in on their conversations and after it learns A & B’s conversation patterns, it decides observation isn’t enough and cuts the line so that it can talk to A itself by impersonating B. So the thought exercise is this: At what level of conversation does A realize they’re not actually talking to B?
In theory, if A and the octopus stay in casual conversation (ie “Hi, how are you?” “Doing good! Ate some coconuts and stared at some waves, how about you?” “Nothing so exciting, but I’m about to go find some nuts.” “Sounds nice, have a good day!” “You too, talk to you tomorrow!”), there’s no reason for A to ever suspect or realize that they’re not actually talking to B because the octopus can mimic conversation perfectly and there’s no further evidence to cause suspicion.
However, what if A asks B what the weather is like on B’s island because A’s trying to determine if they should forage food today or save it for tomorrow? The octopus has zero understanding of what weather is because its never experienced it before. The octopus can only make guesses on how B might respond because it has no understanding of the context. It’s not clear yet if A would notice that they’re no longer talking to B -- maybe the octopus guesses correctly and A has no reason to believe they aren’t talking to B. Or maybe the octopus guessed wrong, but its guess wasn’t so wrong that A doesn’t reason that maybe B just doesn’t understand meteorology. Or maybe the octopus’s guess was so wrong that there was no way for A not to realize they’re no longer talking to B.
Another proposed scenario is that A’s found some delicious coconuts on their island and decide they want to share some with B, so A decides to build a catapult to send some coconuts to B. But when A tries to share their plans with B and ask for B’s opinions, the octopus can’t respond. This is a knowledge-intensive task -- even if the octopus understood what a catapult was, it’s also missing knowledge of B’s island and suggestions on things like where to aim. The octopus can avoid A’s questions or respond with total nonsense, but in either scenario, A realizes that they are no longer talking to B because the octopus doesn’t understand enough to simulate B’s response.
There are other scenarios in this thought exercise, but those cover three bases for LLM ‘intelligence’ pretty well: they can mimic general writing patterns pretty well, they can kind of handle very basic knowledge tasks, and they are very bad at knowledge-intensive tasks.
Now, as a note, the octopus test is not intended to be a measure of how the octopus fools A or any measure of ‘intelligence’ in the octopus, but rather show what the “octopus” (the LLM) might be missing in its inputs to provide good responses. Which brings us to the final 1% of training, the fine-tuning stages;
LLM Interfaces
As mentioned previously, LLMs only mimic language and have some key issues that need to be addressed:
LLM base models don’t like to answer questions nor do it well.
LLMs have token limitations. There’s a limit to how much input they can take in vs how long of a response they can return.
LLMs have no memory. They cannot retain the context or history of a conversation on their own.
LLMs are very bad at knowledge-intensive tasks. They need extra context and input to manage these.
However, there’s a limit to how much you can train a LLM. The specifics behind this don’t really matter so uh… *handwaves* very generally, it’s a matter of diminishing returns. You can get close to the end goal but you can never actually reach it, and you hit a point where you’re putting in a lot of work for little to no change. There’s also some other issues that pop up with too much training, but we don’t need to get into those.
You can still further refine models from the pre-training stage to overcome these inherent issues in LLM base models -- Vicuna-13b is an example of this (I think? Pretty sure? Someone fact check me on this lol).
(Vicuna-13b, side-note, is an open source chatbot model that was fine-tuned from the LLaMA model using conversation data from ShareGPT. It was developed by LMSYS, a research group founded by students and professors from UC Berkeley, UCSD, and CMU. Because so much information about how models are trained and developed is closed-source, hidden, or otherwise obscured, they research LLMs and develop their models specifically to release that research for the benefit of public knowledge, learning, and understanding.)
Back to my point, you can still refine and fine-tune LLM base models directly. However, by about the time GPT-2 was released, people had realized that the base models really like to complete documents and that they’re already really good at this even without further fine-tuning. So long as they gave the model a prompt that was formatted as a ‘document’ with enough background information alongside the desired input question, the model would answer the question by ‘finishing’ the document. This opened up an entire new branch in LLM development where instead of trying to coach the LLMs into performing tasks that weren’t native to their capabilities, they focused on ways to deliver information to the models in a way that took advantage of what they were already good at.
This is where LLM interfaces come in.
LLM interfaces (which I sometimes just refer to as “AI” or “AI interface” below; I’ve also seen people refer to these as “assistants”) are developed and fine-tuned for specific applications to act as a bridge between a user and a LLM and transform any query from the user into a viable input prompt for the LLM. Examples of these would be OpenAI’s ChatGPT and Google’s Bard. One of the key benefits to developing an AI interface is their adaptability, as rather than needing to restart the fine-tuning process for a LLM with every base update, an AI interface fine-tuned for one LLM engine can be refitted to an updated version or even a new LLM engine with minimal to no additional work. Take ChatGPT as an example -- when GPT-4 was released, OpenAI didn’t have to train or develop a new chat bot model fine-tuned specifically from GPT-4. They just ‘plugged in’ the already fine-tuned ChatGPT interface to the new GPT model. Even now, ChatGPT can submit prompts to either the GPT-3.5 or GPT-4 LLM engines depending on the user’s payment plan, rather than being two separate chat bots.
As I mentioned previously, LLMs have some inherent problems such as token limitations, no memory, and the inability to handle knowledge-intensive tasks. However, an input prompt that includes conversation history, extra context relevant to the user’s query, and instructions on how to deliver the response will result in a good quality response from the base LLM model. This is what I mean when I say an interface transforms a user’s query into a viable prompt -- rather than the user having to come up with all this extra info and formatting it into a proper document for the LLM to complete, the AI interface handles those responsibilities.
How exactly these interfaces do that varies from application to application. It really depends on what type of task the developers are trying to fine-tune the application for. There’s also a host of APIs that can be incorporated into these interfaces to customize user experience (such as APIs that identify inappropriate content and kill a user’s query, to APIs that allow users to speak a command or upload image prompts, stuff like that). However, some tasks are pretty consistent across each application, so let’s talk about a few of those:
Token management
As I said earlier, each LLM has a token limit per interaction and this token limitation includes both the input query and the output response.
The input prompt an interface delivers to a LLM can include a lot of things: the user’s query (obviously), but also extra information relevant to the query, conversation history, instructions on how to deliver its response (such as the tone, style, or ‘persona’ of the response), etc. How much extra information the interface pulls to include in the input prompt depends on the desired length of an output response and what sort of information pulled for the input prompt is prioritized by the application varies depending on what task it was developed for. (For example, a chatbot application would likely allocate more tokens to conversation history and output response length as compared to a program like Sudowrite* which probably prioritizes additional (context) content from the document over previous suggestions and the lengths of the output responses are much more restrained.)
(*Sudowrite is…kind of weird in how they list their program information. I’m 97% sure it’s a writer assistant interface that keys into the GPT series, but uhh…I might be wrong? Please don’t hold it against me if I am lol.)
Anyways, how the interface allocates tokens is generally determined by trial-and-error depending on what sort of end application the developer is aiming for and the token limit(s) their LLM engine(s) have.
tl;dr -- all LLMs have interaction token limits, the AI manages them so the user doesn’t have to.
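(A bare-bones sketch of that bookkeeping, with every number and the chars/4 token rule invented purely for illustration:)

```python
TOKEN_LIMIT = 4096            # invented interaction limit
RESERVED_FOR_RESPONSE = 800   # tokens saved for the output

def count_tokens(text):
    return len(text) // 4  # crude rule of thumb, not a real tokenizer

def build_prompt(instructions, user_query, optional_pieces):
    budget = TOKEN_LIMIT - RESERVED_FOR_RESPONSE
    parts = [instructions, user_query]  # must-haves go in first
    budget -= sum(count_tokens(p) for p in parts)
    # Optional material (context, history) is added in priority order
    # until the budget runs out; whatever doesn't fit gets dropped.
    for piece in optional_pieces:
        if count_tokens(piece) <= budget:
            parts.append(piece)
            budget -= count_tokens(piece)
    return "\n\n".join(parts)
```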
Simulating short-term memory
LLMs have no memory. As far as they figure, every new query is a brand new start. So if you want to build on previous prompts and responses, you have to deliver the previous conversation to the LLM along with your new prompt.
AI interfaces do this for you by managing what’s called a ‘context window’. A context window is the amount of previous conversation history it saves and passes on to the LLM with a new query. How long a context window is and how it’s managed varies from application to application. Different token limits between different LLMs is the biggest restriction for how many tokens an AI can allocate to the context window. The most basic way of managing a context window is discarding context over the token limit on a first in, first out basis. However, some applications also have ways of stripping out extraneous parts of the context window to condense the conversation history, which lets them simulate a longer context window even if the amount of allocated tokens hasn’t changed.
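(Here’s a minimal first-in-first-out context window. The chars/4 token estimate is again a stand-in for a real tokenizer:)

```python
from collections import deque

class ContextWindow:
    def __init__(self, max_tokens=1000):
        self.max_tokens = max_tokens
        self.turns = deque()

    def _tokens(self, text):
        return len(text) // 4  # rough estimate, not a real tokenizer

    def add(self, turn):
        self.turns.append(turn)
        # Discard the oldest turns first once we're over the allowance.
        while sum(self._tokens(t) for t in self.turns) > self.max_tokens:
            self.turns.popleft()

    def render(self):
        return "\n".join(self.turns)  # what gets sent along with the new query
```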
Augmented context retrieval
Remember how I said earlier that LLMs are really bad at knowledge-intensive tasks? Augmented context retrieval is how people “inject knowledge” into LLMs.
Very basically, the user submits a query to the AI. The AI identifies keywords in that query, then runs those keywords through a secondary knowledge corpus and pulls up additional information relevant to those keywords, then delivers that information along with the user’s query as an input prompt to the LLM. The LLM can then process this extra info with the prompt and deliver a more useful/reliable response.
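(A bare-bones version of that flow, using naive keyword matching against a tiny invented corpus -- real systems use fancier retrieval like vector embeddings, but the shape is the same:)

```python
corpus = {
    "catapult": "A catapult is a device that launches projectiles...",
    "weather": "Weather is the day-to-day state of the atmosphere...",
}

def retrieve(query):
    # Pull any corpus entries whose keyword appears in the query.
    words = set(query.lower().split())
    return [text for keyword, text in corpus.items() if keyword in words]

def build_prompt(query):
    context = "\n".join(retrieve(query))
    return f"Background information:\n{context}\n\nQuestion: {query}"

print(build_prompt("how far can a catapult throw a coconut?"))
```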
Also, very importantly: “knowledge-intensive” does not refer to higher level or complex thinking. Knowledge-intensive refers to something that requires a lot of background knowledge or context. Here’s an analogy for how LLMs handle knowledge-intensive tasks:
A friend tells you about a book you haven’t read, then you try to write a synopsis of it based on just what your friend told you about that book (see: every high school literature class). You’re most likely going to struggle to write that summary based solely on what your friend told you, because you don’t actually know what the book is about.
This is an example of a knowledge intensive task: to write a good summary on a book, you need to have actually read the book. In this analogy, augmented context retrieval would be the equivalent of you reading a few book reports and the wikipedia page for the book before writing the summary -- you still don’t know the book, but you have some good sources to reference to help you write a summary for it anyways.
This is also why it’s important to fact check a LLM’s responses, no matter how much the developers have fine-tuned their accuracy.
(*Sidenote, while AI does save previous conversation responses and use those to fine-tune models or sometimes even deliver as a part of a future input query, that’s not…really augmented context retrieval? The secondary knowledge corpus used for augmented context retrieval is…not exactly static, you can update and add to the knowledge corpus, but it’s a relatively fixed set of curated and verified data. The retrieval process for saved past responses isn’t dissimilar to augmented context retrieval, but it’s typically stored and handled separately.)
So, those are a few tasks LLM interfaces can manage to improve LLM responses and user experience. There’s other things they can manage or incorporate into their framework, this is by no means an exhaustive or even thorough list of what they can do. But moving on, let’s talk about ways to fine-tune AI. The exact hows aren't super necessary for our purposes, so very briefly;
Supervised fine-tuning
As a quick reminder, supervised learning means that the training data is labeled. In the case for this stage, the AI is given data with inputs that have specific outputs. The goal here is to coach the AI into delivering responses in specific ways to a specific degree of quality. When the AI starts recognizing the patterns in the training data, it can apply those patterns to future user inputs (AI is really good at pattern recognition, so this is taking advantage of that skill to apply it to native tasks AI is not as good at handling).
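(For a concrete picture of what “labeled input-output pairs” means here, this mirrors the JSONL format OpenAI documents for chat fine-tuning -- the examples themselves are invented:)

```python
import json

training_examples = [
    {"messages": [
        {"role": "user", "content": "Summarize: The meeting moved to 3pm."},
        {"role": "assistant", "content": "Meeting rescheduled to 3pm."},
    ]},
    {"messages": [
        {"role": "user", "content": "Summarize: Sales rose 4% in Q2."},
        {"role": "assistant", "content": "Q2 sales up 4%."},
    ]},
]

for example in training_examples:
    print(json.dumps(example))  # one JSON object per line = a JSONL training file
```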
As a note, some models stop their training here (for example, Vicuna-13b stopped its training here). However there’s another two steps people can take to refine AI even further (as a note, they are listed separately but they go hand-in-hand);
Reward modeling
To improve the quality of LLM responses, people develop reward models to encourage the AIs to seek higher quality responses and avoid low quality responses during reinforcement learning. This explanation makes the AI sound like it’s a dog being trained with treats -- it’s not like that, don’t fall into AI anthropomorphism. Rating values are simply applied to LLM responses, and the AI is coded to try to get a high score for future responses.
For a very basic overview of reward modeling: given a specific set of data, the LLM generates a bunch of responses that are then given quality ratings by humans. The AI rates all of those responses on its own as well. Then using the human labeled data as the ‘ground truth’, the developers have the AI compare its ratings to the humans’ ratings using a loss function and adjust its parameters accordingly. Given enough data and training, the AI can begin to identify patterns and rate future responses from the LLM on its own (this process is basically the same way neural networks are trained in the pre-training stage).
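(The comparison step in toy form -- mean squared error stands in here for whatever loss function is actually used:)

```python
human_ratings = [0.9, 0.2, 0.6]  # moderated human scores for three responses
model_ratings = [0.7, 0.4, 0.6]  # the reward model's own scores for the same three

# Training adjusts the reward model's parameters to shrink this number.
loss = sum((h - m) ** 2 for h, m in zip(human_ratings, model_ratings)) / len(human_ratings)
print(f"loss = {loss:.3f}")
```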
On its own, reward modeling is not very useful. However, it becomes very useful for the next stage;
Reinforcement learning
So, the AI now has a reward model. That model is now fixed and will no longer change. Now the AI runs a bunch of prompts and generates a bunch of responses that it then rates based on its new reward model. Pathways that led to higher rated responses are given higher weights, pathways that led to lower rated responses are minimized. Again, I’m kind of breezing through the explanation for this because the exact how doesn’t really matter, but this is another way AI is coached to deliver certain types of responses.
You might’ve heard of the term reinforcement learning from human feedback (or RLHF for short) in regards to reward modeling and reinforcement learning because this is how ChatGPT developed its reward model. Users rated the AI’s responses and (after going through a group of moderators to check for outliers, trolls, and relevancy), these ratings were saved as the ‘ground truth’ data for the AI to adjust its own response ratings to. Part of why this made the news is because this method of developing reward model data worked way better than people expected it to. One of the key benefits was that even beyond checking for knowledge accuracy, this also helped fine-tune how that knowledge is delivered (ie two responses can contain the same information, but one could still be rated over another based on its wording).
As a quick side note, this stage can also be very prone to human bias. For example, the researchers rating ChatGPT’s responses favored lengthier explanations, so ChatGPT is now biased to delivering lengthier responses to queries. Just something to keep in mind.
So, something that’s really important to understand from these fine-tuning stages and for AI in general is how much of the AI’s capabilities are human regulated and monitored. AI is not continuously learning. The models are pre-trained to mimic human language patterns based on a set chunk of data and that learning stops after the pre-training stage is completed and the model is released. Any data incorporated during the fine-tuning stages for AI is humans guiding and coaching it to deliver preferred responses. A finished reward model is just as static as a LLM and its human biases echo through the reinforced learning stage.
People tend to assume that if something is human-like, it must be due to deeper human reasoning. But this AI anthropomorphism is…really bad. Consequences range from the term “AI hallucination” (which is defined as “when the AI says something false but thinks it is true,” except that is an absolute bullshit concept because AI doesn’t know what truth is), all the way to the (usually highly underpaid) human labor maintaining the “human-like” aspects of AI getting ignored and swept under the rug of anthropomorphization. I’m trying not to get into my personal opinions here so I’ll leave this at that, but if there’s any one thing I want people to take away from this monster of a post, it’s that AI’s “human” behavior is not only simulated but very much maintained by humans.
Anyways, to close this section out: The more you fine-tune an AI, the more narrow and specific it becomes in its application. It can still be very versatile in its use, but they are still developed for very specific tasks, and you need to keep that in mind if/when you choose to use it (I’ll return to this point in the final section).
84 notes
mystudentai · 11 months ago
Text
Ai Bot for Excel Formula
AI Bot for Excel Formula: Student AI is an educational tool tailored to enhance your intellectual capabilities, featuring an affordable personal assistant dedicated to homework and learning.
0 notes
avertigo · 3 months ago
Text
10 Artificial Intelligence tools to increase your productivity
Do you think Artificial Intelligence (AI) is only for programmers or engineers? Wrong! AI is a tool that is within everyone's reach, and if you don't already use it because you think it's "complicated", you're falling behind. In a world where technology is advancing by leaps and bounds, not taking advantage of these tools can put you at a disadvantage. But don't worry, you don't need to be…
0 notes
iwan1979 · 3 years ago
Link
Building Microsoft Excel formulas doesn't have to be a daunting task anymore, thanks to this free AI bot available for anyone to use.
0 notes
warsofasoiaf · 5 years ago
Note
Have you played Fallout 4? What did you think of it?
Joseph Anderson had a phenomenal video on Fallout 4, although it is enormous, so be careful. Overall, there were things to like and things not to like about Fallout 4. I'll start with what I liked. Throwing a cut in here because it's long.
Combat in the first-person Fallout games has always been clunky, and enemy AI largely consisted of charging straight at you or shooting from as far away as possible. Difficulty came primarily from enemy quantity, high damage output, or incredibly high enemy hit points. The last of these has been a particular Bethesda problem in their games, with enemies being incredible damage sponges, making late-game fights a boring slog as you slowly whittle down health pools that are impossible to damage in any meaningful capacity. While enemy variety isn't nearly as high as the game's fans would have you believe if you conceive of enemies as AI patterns, the AI activity did have some nice variations. Human enemies used cover, ghouls bobbed and weaved as you shot them, mole rats tried to ambush you. It's got nothing on games with a fully realized combat system, but it does make the combat that you do engage in much more enjoyable.
All of the random crap you can pick up in a Bethesda game having a purpose is another positive. It is a true nuisance, when playing a game, to hit my encumbrance limit only to find out it's because I've picked up a bunch of brooms, bowls, and other garbage accidentally while grabbing coin and other worthwhile treasures. Having these things matter makes an object mechanically worthwhile, aside from its role in level design; typewriters are useful as items, as opposed to just showing you that the ruined building you're in was formerly a newspaper office. As crafting is a big portion of the game, having these elements of clutter (which still require modeling, rendering, placement, etc.) provide component parts for crafting creates more utility from them. Now if you need aluminum, you'll try to raid something like a cannery because it will have aluminum cans, which is an excellent way to create player-generated initiative. It also reinforces one of the primary themes of the game, which is crafting and design, where even the trailers suggest building as a key idea. It's certainly sensible for a post-apocalyptic game to focus on building a new society upon the ruins of the older one, and given what the game was trying to do with its four-faction mechanic, it's clear that this was their intent, and good on them for trying to ensure that these systems factor back into that principal intent.
Deathclaws look properly scary, the animations with Vault Boy were funny, and there's some pretty window dressing. The voice work wasn't bad, the notable standout being Nick Valentine. The Brotherhood airship was an impressive visual. I had a little fun creating some basic settlements, particularly in Hangman's Alley, where I tried to create a network of suspended buildings, and Spectacle Island, where I had room to grant every prospective settler a shack. Bethesda clearly looked to create a game with mass-market appeal, and I believe the metrics bear out that they succeeded in that regard. The robots in the USS Constitution quest were very funny; the writers were able to make the absolute ridiculousness of the situation work (curse you, Weatherby Savings and Loan!) and framed it well as a comedic sidequest, with a final impressive visual if you side with the bots and the ship takes flight.
Now that this is out of the way, I think that a lot of what Fallout 4 did was not the right move. 
The quest design was particularly atrocious in this regard. Most of the radiant quests boiled down to a simple formula - go to the dungeon, get to the final room where you need to either kill the boss or get an item from the boss chest, return. In this game, though, the main story quests were often boiled down to just this same formula. You need to find a doodad from a Courser to complete your teleporter? Go to the dungeon, kill the boss, recover the item. The Railroad needs you to help an escaped synth! Do it by going to the dungeon and getting to the final room. This really hampers the enjoyment of the game because the expressiveness of the setting and the elements of an RPG are often explored through quests. Quests are meant to get you out into the world and give you an objective, but they are also meant to connect you to the people that you're dealing with. If every quest is boiled down to the same procedure, that hurts the immersion, but the bigger sin is that when you return, you have another quest waiting for you. That robs the player of the sense of accomplishment because there is no permanent solution to problems, even for a minute. There is no different end-state for the player to see the transition from one state to the other and feel accomplished that they were the ones who did it. Other RPGs always understood this - a D&D party might save a town by investigating an illness, take out an evil druid who has charmed the wildlife into attacking supply and trade shipments, or slay goblins who are raiding cattle. There are a lot of possibilities that might even feel samey: whether you're killing charmed dire wolves or goblin cattle thieves, you're still going to the dungeon and fighting the boss, and the usual flair and variation came from encounter design. After you'd done that, though, the NPCs might say "Hey, Mom is feeling better after you cured that disease, she's starting to walk again," "Hey, we were able to send a shipment of wine from the vineyards out to the capital, here's some coin for the shipment as reward for your service," or even just a simple "Hey, thanks for taking out those cattle thieves." There's a sense of accomplishment even if it's a fleeting "we did a cool thing." Computer RPGs are tougher in this regard; part of the sense of accomplishment in tabletop gaming is shared with your friends, it's a shared activity, but usually the reward was some experience, character growth, and new content. There isn't new content here in Fallout 4, though, because of the samey quest design and lack of progression.
The conversational depth was also ruined, with so many of the voice choices mangled by the conversation system they designed. By demanding a four-choice system, they limited themselves to always requiring four options, which completely hamstrung interactivity. The previous menu design allowed for as many lines as you wanted, even if the choices were usually beads on a string. The depth and variation, however, are even lower than what could be found in games like Mass Effect 3, and the short word descriptions were often so inaccurate that they created a massive disconnect between me the player and the Sole Survivor, because the character wasn't saying what I thought they would be saying. That prevented me from feeling immersed, because a "Sarcastic" option could be a witty joke or a threat that sounds like it should come out of a bouncer. The character options were already limited, with Nate being a veteran and Nora being a lawyer, but this lack of depth prevents me from feeling the character even more than a scripted backstory does. You get those in games, but being unable to predict how I'm reacting is something that kills the character.
Bethesda needs to stop using "find (x) loved one" as a means to motivate people to do a quest, or if they don't want to rid themselves of that tool in their toolbox, they need to do a better job getting me to like the loved one. More linear games can get away with this, but open-world games encourage the sort of idle dicking around that doesn't make any sense for a person who is attempting to find a family member. Morrowind did this much better, where your main task was to be an Imperial agent, and you were encouraged to join other factions and do quests as a means to establish a cover identity and get more acquainted with combat. Folks who didn't usually ended up going to Hasphat Antabolius and getting their face kicked in by Snowy Granius. Here, though, what sort of parent am I if, instead of pursuing a lead to find my infant son, I'm wandering east because I saw what looked like a cool ruin and I need XP to get my next perk? (Another gripe: perks that are simple percentage increases, because they slow down advancement and make combat a slog if you don't take them, depressing what should be a sense of accomplishment.) By making us try to feel close to a character while refusing to give us the players time with them, there is no sense of bonding. I felt more connection to James in Fallout 3 than I did to Shaun, but even then, I felt more connection to him because he was voiced by Liam Neeson than because of any sense of fatherly affection. The same goes for the spouse and baby Shaun; I feel little for them because I see them only a little. I know that I should care more, but I also know that I the player don't, because all that I was given is "you should care about them." You need time to get to know characters in a game, along with good writing and voice work. I like Nick because he quoted "The Raven" when seeing the Brotherhood airship and I thought that was excellent writing; I didn't have any experiences with Shaun to give me that same sense of bonding.
They've also ruined the worldbuilding. The first-person Fallout games have always had a problem with this, with Fallout 3 recycling Super Mutants, the Brotherhood of Steel, and other iconic Fallout things into Washington D.C. Part of this is almost certainly the same reason that The Force Awakens was such a dull rehash of the plot of A New Hope: they wanted to establish some sort of continuity with a new director so as not to frighten off the old fans they relied on to provide a significant majority of the sales. The problem, of course, is that this runs into significant continuity problems, now needing Vault 87 to have a strain of FEV and a joint Vault-Tec/US Government experiment program on the East Coast, so we can have Super Mutants. Jackson's chameleon isn't native to Washington D.C., but we need Deathclaws because they're the iconic scary Fallout enemy, as opposed to creating something new with the local fauna. This is only made worse because they did do exactly that with the yao guai, formed from the American black bear (the black bear doesn't typically range in the Chesapeake Basin near DC these days, but it's close enough, and given the disappearance of the humans who forced them out, they could easily return to their old pre-human ranges). Some creatures are functions of the overall setting and can be global; ghouls are the big one here, since radiation would be a global thing, fitting considering Fallout is a post-apocalypse specifically caused by nuclear war. Others, though, are clearly mutated creatures and so would be more localized. Centaurs and floaters were designed by FEV experiments and collared by Super Mutants; they should really only be around Super Mutants. Radscorpions shouldn't be around; there would probably instead be mutated spiders. Making things worse is that the monster designers do develop some excellent enemies when they think about it. Far Harbor has a mutant hermit crab that uses a truck as a shell (a lobster restaurant truck, which is passable enough for a visual joke even if it falls apart when you think about other trucks they might have used) and a monster that uses an angler lure resembling a crafting component - these are good ideas, but the developers needed to awkwardly shoehorn in iconic Fallout things that have no place there. This isn't to say that I'm in love with a lot of Fallout's worldbuilding; a lot of the stuff in Fallout 2 I found to be kind of dumb, particularly the talking deathclaws, but as the series went on it accumulated objects without meaning. The G.E.C.K. in Fallout 3 was pretty much a magic recombinator, which makes no sense as a technology in a world devastated by resource collapse; something similar can be said about the Sierra Madre vending machines.
Fallout 4, though, had a lot of worldbuilding inconsistencies that really took an axe to the setting. The boy in the fridge outlasts the entire Great War but apparently never needed to eat or drink. This is, of course, stupid, because ghouls have always been shown to need to eat and drink - Fallout 1's Necropolis section has a Water Chip, but if you take it without finding an alternate source of clean water, the ghouls will die. Ghoul settler NPCs that flock to your player-crafted towns require food and water. The entire thing was ruined by a complete lack of care, to build a quest where you reunite a lost boy with his still-alive ghoulified parents. I think this one bothers me not simply because of the egregious worldbuilding, which isn't even consistent within the very game it's written in, but because it's done so frivolously, for a boring escort quest. It feels scattershot, and that's the problem, I think, with a lot of Fallout 4's quests. They feel disconnected, like every writer worked in a cubicle without talking to any of the other writers. Same with things like the Lady in the Fog.
Are we done with that? Good, because now we're going into the parts that I really dislike - the main quest and the factions. These are just awful. The developers took what folks really liked about Fallout 2 and Fallout: New Vegas (Fallout 1 did have interesting factions, but they were largely self-contained, more towns than anything else) and completely botched it. New Vegas was the clear inspiration for these factions, with its four-faction model of NCR, Legion, House, and Independent meaning there were four different ways to go forward into the future; so we get three factions that fight each other and a fourth, more player-friendly faction that roughly resembles Independent Vegas, where you can pick and choose which factions you bring in with you and which you get rid of. Thematically, this fits the core of the game: crafting is a big portion of what you do, and crafting what sort of world the Commonwealth will become is simply a logical extension of it. The factions aren't presented well, though. The Railroad are impossibly naive and don't demonstrate any rougher edges, like denying supplies to humans in order to fuel their synth effort, even though such a thing should be evident if the post-apocalypse of the Commonwealth is to be believed. The Institute are sinister murderers and replacers who never bring any of the advanced technology that could provide some benefit, such as the gigantic orange gourds they can grow. So much of their kill-and-replace mentality seems to be done for no great overarching purpose. The Minutemen are basically blank, pretty much just a catch-all for the player-built settlements, though the player as the leader of the Minutemen ends up getting bossed around by Preston to the point of the faction rejecting your commands to proceed with the main quest - a significant problem with Bethesda factions, where you are the leader but never get any actual sense of leadership. There doesn't appear to be any addressing of the failures of the previous Minutemen, whether that be the previous summit or new problems such as settlements feuding with each other and requiring the general to intervene and mediate. The Brotherhood come the closest to a real faction with advantages and drawbacks, if you squint: they are feudal overlords with the firepower to fight Super Mutants and other mutated nasties, but they also violently reject ghouls and synths as part of their dogma - except for seemingly not caring when you bring such a companion around, or when ghoul settlers live in settlements they control. But even then, we don't really see the Brotherhood providing the protection to settlements that they demand food in exchange for; the typical radiant quest to destroy a pack of feral ghouls or super mutants is directed from a Brotherhood quest giver to a randomly determined location, hardly a good way to illustrate whether the Brotherhood is actually protecting the settlements they administer. We see little change in the Commonwealth save that certain factions are alive or not, because the game needs to stay active in order to serve radiant quests, so not even the signature ending slideshows can give us the illusion of effects building off our actions. This is contrary to the theme of building a better world in the Commonwealth, because there is no building.
Special notice must be given to the Nuka-World raiders, because they show the big problems with the factions. You can be a Raider in Nuka-World, but only after becoming the Overboss, which is fair enough. Yet you're already a Minuteman, and the Minutemen don't activate any kill-on-sight order, and Preston still helps you out. The game is so terrified of people losing out on content that it makes permanent consequences rare, and when you do something like order an attack, it can be rescinded automatically if one of your companions is there. As an Overboss, you do grunt work in the Commonwealth, and the factions get mad and pissy if you don't give them things, even though giving just one section of the park to one of the factions is more than they ever got from Colter. It's like they don't exist until the player shows up, which is exactly how a lot of modern Bethesda character and faction building seems to be. While in most computer games a sort of uneasy status quo is the desired beginning state, because it gives the protagonist the chance to make ripples while justifying the existence of a status quo the player can change, it has to be applied consistently.
The main quest itself is silly. There's a decent twist with Shaun becoming Father that sort of works, and which would have worked much better if we had actually gotten a chance to bond with him, although the continuity of everything gets wiggy quickly. When he said that he looked over the world and saw nothing but despair, I wondered if they were going to bring up a big question and a debate between Father and the player - the idea of what worth the people on the surface have - but it goes nowhere; it's a missed opportunity. The main quest is just a means to meet all four factions, and it's a barebones skeleton at best. There are some interesting concepts they try, but what they do often falls flat. They try to establish some sort of empathy for Kellogg in the Memory Den, but it's lazy and cheap because he kidnaps a baby and wastes your spouse - a wasted effort at empathy only made worse when you get criticized for not showing any sympathy. Kellogg then shows up in Nick's memory for one second, and then that little story nugget is ignored. The half-baked nature of the story keeps being brought back up, which is a pity because we actually saw them do a competent job in Far Harbor. The Children of Atom are crazy and really aren't sympathetic in any way, but some of the folks inside the sub aren't so bad, which might keep you from wanting to detonate the sub, or at least make you think enough to look for another solution. DiMA did some monstrous things, and if you bring him to justice, the game actually takes the time to evaluate whether or not you helped out Far Harbor, with meaningful consequences if you took the time to do the sidequests, which imparts far more meaning to them.
While there are a lot of problems that show up in terms of binary completion, the question of whether to replace Tektus and turn the Children of Atom toward a more moderate path is a good one; it would actually give a lot more merit to the Institute if they had ever been shown to act with the same level of care. That only makes Fallout 4's problems stand out more, because it shows the developers were capable of it but didn't do it. This isn't the only missed opportunity; synths themselves become a big problem. The goal was to create a very paranoid feeling, but it was so sorely under-utilized that I never grew suspicious of folks, because the game never gave me enough incentive to be suspicious of them. I didn't think Bethesda would make synths that would give you false information or ambush you, because that would have been potentially missed content. The idea of whether you are a synth or not is clearly an attempt to give the game more depth than it is presenting. You're not a synth - Father's actions make no sense if you are one, and DiMA attempting to make you think you are is silly because you know you aren't.
I think the game would have been much better if they had dropped the notion of Fallout entirely. If they had instead looked to create an open-world post-apocalyptic game focusing on crafting and building towns - perhaps with an eventual goal of building many towns, establishing transportation networks, and rebuilding a junkyard society into a decent place (or going full Mad Max Bartertown, complete with a Thunderdome, for players looking for an evil and over-the-top option) - that might have been an interesting game for Bethesda to develop a new IP around, even contracting with smaller studios for those who wish to tell story-heavy games in the setting. Instead, they applied Fallout like a bad paint job, cobbling together weak RP elements and story that made the game feel like a hydra that couldn't recognize it was one being with multiple heads, constantly tearing the other parts of itself to ribbons.
If I wanted to further improve it, I think I would have instead made the spouse a synth. It would require some serious reworking, but I would have made it so that Shaun did believe that synths were people, or that they were real enough that the difference was negligible - they had free will. During the initial grab, the Institute took the entire cryopod where Shaun was, baby and parent both. They used Shaun to create the next generation of synths, but something happened with the parent, and they died during defrost. Shaun hates the Institute for what they did, but what happened was truly a medical complication, not malicious in any way. When he learns that the player character is active, he creates a synth programmed to believe they are the spouse. He believes that exposing who he really is to the surviving parent would be traumatic, and as he hears that the player character is thriving, he wants to give them a chance at a normal life, and to alleviate the loss that he had in his own life with the loss of his parents. So the spouse is sent to you, and for a long time, you and the spouse have no idea. You adventure together, you build settlements together; the game encourages you to have a good relationship. It doesn't have to be hunky-dory, and I'd argue it's actually better if it's not. Have the spouse be programmed with some rough experiences in the Wasteland, so they're nervous, skittish, maybe even a little resentful that the player character snoozed their way through everything, but slowly rebuild the relationship. That way, when the quest eventually comes where you find the truth, the player character has to confront that reality. Then when you confront Shaun, he explains himself, and the player is given the choice to forgive him, be understanding but still angry, or be hugely pissed at the manipulation. That's drama that uses the core theme of what synths are about, with the whole kill-and-replace motif the Institute has. There's a plot twist that batters the player; there's one that's just messy and gross and tough to reconcile. There's one where the conclusion the player comes to is valid, because it's the player themselves deciding what the meaning of it is.
So overall, I see Fallout 4 as a bunch of missed opportunities and clumsy writing wrapped up in the kind of popular, shallow open world that triple-A games end up having.
Thanks for the question, Jackie.
SomethingLikeALawyer, Hand of the King
26 notes · View notes
copperdigitalinc · 5 years ago
Text
5 Stages of Patient Experience – mHealth Apps
Prevention
We are all familiar with the adage 'Prevention is better than cure.' Throw modern technology into the mix, and you may have a winning formula to ensure that we can prevent a disease or condition by taking educated steps at different levels.
Data-Driven
A data-driven approach is about calling real-time data APIs from verified sources and healthcare authorities such as the WHO, CDC, etc. EHRs, or electronic health records, have defined standards by which data gets shared with healthcare providers. You may want to read up on the FHIR standard, which streamlines calling RESTful APIs and lets you work with this data in the form of JSON, XML, or RDF.
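As a concrete illustration, here is a minimal sketch of fetching a FHIR resource as JSON over a RESTful API. It uses the public HAPI FHIR test server as a stand-in; a real mHealth app would point at its provider's endpoint and add authentication, and the field access at the end assumes the usual Bundle/Patient shapes.

```python
import requests

# Public HAPI FHIR test server, used here only as a stand-in endpoint.
BASE_URL = "https://hapi.fhir.org/baseR4"

resp = requests.get(
    f"{BASE_URL}/Patient",
    params={"_count": 5},                        # fetch five patients
    headers={"Accept": "application/fhir+json"}, # ask for FHIR JSON
    timeout=10,
)
resp.raise_for_status()
bundle = resp.json()  # FHIR search results come back as a Bundle resource

for entry in bundle.get("entry", []):
    patient = entry["resource"]
    family = (patient.get("name") or [{}])[0].get("family", "<no name>")
    print(patient["id"], family)
```

The same pattern (resource URL, search parameters, FHIR JSON media type) applies to observations, appointments, and the other resource types an mHealth app would pull.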
Employer-Based
These would fall under the purview of B2E applications for an organization. Encouraging employees to monitor, track, and maintain their fitness levels not only helps employers save on medical bills but also promotes well-being within the organization.
Some of the employers who have implemented such programs through technology have ensured that employees are healthier and are less likely to skip work due to ill health.
Community-Based
A community-based approach to tracking, prevention, and intervention is perhaps one of the best ways to support individuals' overall well-being. A mobile application can help the support group stay connected and intervene at the right moment, whether it's an individual seeking community support for alcohol addiction or drug abuse.
Risk Assessment
In our ongoing battle with COVID-19, not only in the US but around the world, mobile apps are paving the way for better community-level information transmission and prevention checks. Apps like COVIDSafe, published by the government of Australia, are helping communities flatten the curve by identifying close contacts of infected citizens and helping them take measures at a regional level.
Risk Assessment and countermeasures through a mobile application have become exceedingly important in today’s world.
Applications like BURN MD help doctors assess a victim's burn areas and quickly analyze the risk without having to document it through a paper-based process. This application helps save lives and is extremely important in the assessment and planning process for surgeries. You may want to build an app that addresses the risk-assessment aspect of the patient's journey, which can be pivotal in saving lives.
Diagnosis
Gone are the days when we had to physically do the rounds of a healthcare facility, book appointments with busy doctors, and wait in endless lines to get a condition diagnosed. With modern technology, we can diagnose a condition by making an informed decision through communication with qualified doctors and physicians on a mobile device.
Many applications that support the diagnosis of a condition are doing the rounds and have become popular, such as ADA, with more than five million downloads. Such applications should give us a ballpark estimate of the condition, helping us get in touch with healthcare facilities for further insights. It is much better to have a good reference point than to be clueless and panicking.
Many applications can help connect patients with diagnostic services as well, for a better patient experience. For example, if a patient wants their blood samples taken, it could be scheduled through the mHealth app and the report results pushed to the patient's mobile device. The engagement process becomes streamlined and directly enhances the patient's experience.
Telemedicine
Some of the problems hampering the patient's journey are practical scenarios and challenges we face every day. Sometimes we cannot find time in our schedule to see a doctor, or the location we're in makes it difficult to make it happen.
Telemedicine has proved to be the bridge between a patient and a doctor, without the need to step into a physical office for non-emergency conditions. With such applications, it becomes simple to consult your doctor through your mobile device and get remote help. Apps such as MDlive help connect you to pediatricians, behavioral health services, and psychiatry whenever required, without needing to visit healthcare facilities physically.
Similarly, apps like Lemonaid help connect you with a doctor for diagnosis or consultation on a medical condition and ship the prescription medicines the same day! Not only is it convenient and fast, but it also helps save a substantial amount of money, with per-consultation fees as low as $25.
If you're building a telemedicine app, make sure the UI/UX follows suit, making the engagement between a patient and a caregiver easier with intuitive screens, and that functionality in the engagement process, like video/text chat and document sharing, is fast and efficient.
Bots
You may have seen bots answering your questions on a website. These bots are called chatbots and are advanced enough to answer simple queries and point users in the right direction. Moreover, due to recent advancements in artificial intelligence, these bots can now do much more than answer simple questions.
Google's Duplex bot was a game-changer in the field of AI bots. Released in 2018, it is capable of human-sounding conversation, so much so that it becomes difficult to differentiate it from a human. There are mood-improving bots like Woebot, whose sole purpose is to make you feel better by showing empathy while giving a logical consultation like a psychologist!
Bots in healthcare offer more than just conversation. Their tasks can cover a wide array of activities, from medication management and organizing patients' choices to providing advice on more straightforward medical issues or reaching out to medical care facilities during an emergency.
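To make the routing idea tangible, here is a toy illustration, not any real product's code: a keyword-based triage bot that sends a patient message to one of the task areas above. All of the patterns and intent labels are made up; a production bot would use a trained NLU model instead of regular expressions.

```python
import re

# Hypothetical intent table mapping keyword patterns to the task areas
# described above. A real bot would use a trained language model here.
INTENTS = {
    r"\b(refill|dose|medication|pill)\b": "medication management",
    r"\b(appointment|schedule|book)\b": "appointment booking",
    r"\b(chest pain|can't breathe|emergency)\b": "EMERGENCY: contact a care facility",
    r"\b(headache|cold|fever|rash)\b": "advice on straightforward medical issues",
}

def route(message: str) -> str:
    """Return the task area a patient message should be routed to."""
    for pattern, intent in INTENTS.items():
        if re.search(pattern, message, re.IGNORECASE):
            return intent
    return "hand off to a human agent"

print(route("Can I book an appointment for Tuesday?"))  # appointment booking
print(route("I have chest pain"))                       # EMERGENCY: contact a care facility
```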
Estimates
One of the most critical questions every patient seeks to answer is: how much will treatment cost? A mobile application can help answer that question with an estimate. There are quite a few apps available on the app stores, like Aetna's, which can help you find a doctor and see what procedures may cost.
While building such an application, you would have to find the cost of such procedures and either compile your own database or deploy big-data analytics to do the same.
Appointments and Referrals
A simple application that schedules one's appointments can come in handy for any patient wanting to avoid endless queues. Appointment-scheduling mHealth apps have surged in demand due to the recent pandemic, with every healthcare facility following social distancing and trying to avoid crowds in the lobby and waiting areas.
Apps like Appointfix help patients book an appointment round the clock. Moreover, they send reminders in advance, which reduces no-shows at hospitals. Every visit gets documented in one place, and simple reports can be viewed and exported.
Treatment
When you're getting treated, the first things you would want to discover are the nearby healthcare facilities and the doctors who can treat you. Mobile apps built to facilitate searches for these healthcare facilities and provide insights about medical practitioners are often very popular and let people make an informed decision. In case of an emergency, apps like EMNet help you find emergency rooms across hospitals in the US.
Conclusion
While building an mHealth app, you must address these five essential aspects of the patient journey. At Coppermobile, following these five principles has ensured that the mHealth apps we build focus on an excellent overall patient experience!
Source https://bit.ly/2Q6akqH
1 note · View note
webiatorstechnologies · 3 years ago
Text
The Beginner's Guide to Magento 2 Google reCAPTCHA
The step-by-step approach is not enough anymore. Google has placed reCAPTCHA between users and their queries in an effort to block bots from abusing websites. Magento 2 Google reCAPTCHA is here to help you authenticate legitimate users before they can access your blog or site content. Magento 2 Google reCAPTCHA uses the latest cutting-edge technology and servers, with outstanding traffic performance and an unlimited number of customized forms for each of your requirements. We offer a variety of options, so you can choose among form layouts, security methods, and CAPTCHA variations, with our Magento specialists offering expertise on all of these topics.
Sleep well knowing that your website traffic is now protected from spammers, robots, and any other form of cyber-threat. Don't waste time - contact us now for a free consultation!

What Are Robots/Spammers Doing?

Spammers are masters at finding loopholes in order to gain access to popular sites. They develop bots designed to quickly collect email addresses, entering them into online forms without the site owners' permission. The more emails they can collect, the more money they can make off this spam. You may have already seen forms on Magento 2 sites with CAPTCHA fields inserted - this is an anti-spam measure for most popular websites today.

Protect Your Magento 2 Site With reCAPTCHA

One of the most frustrating aspects of running an online shop is the large amount of spam you receive. These simple and cruel form submissions create a burden for both your site's reputation and its traffic. You can't just pick up the phone and call the spammers, though - spam bots are too fast for that. Instead, you'll need to use an anti-spam technology such as reCAPTCHA to keep the bots at bay. A CAPTCHA is a little box that appears on websites, asking you to complete your form with an email address, name, or other information along with a verification challenge. The first thing you'll notice about CAPTCHAs is that they're really simple to use: no extra software to install and no complicated settings. The classic CAPTCHA uses a text-based challenge that you have to transcribe in order to continue with your browsing experience.

How Does It Work?

reCAPTCHA draws on many aspects of modern technology. First of all, it's been designed from the ground up with the help of Artificial Intelligence (AI) and machine-learning algorithms. This allows it to adapt to the newest spam attacks in order to keep your site secure. reCAPTCHA also uses a challenge-response system in order to identify potential spammers. The CAPTCHA formula is being used today by over 175,000 websites across the globe. This form of security has been a mainstay for many website owners for a long time because it has proven itself over and over again. So if you want maximum protection from bots, installing the Magento 2 Google CAPTCHA extension is an excellent idea. As you may have guessed from its name, reCAPTCHA is aimed at preventing automated bot attacks that can scrape your site's content before you notice. When it comes to making sure that you're not being spammed, there's no other technology on the market that's as simple to use or as reliable. Please contact us by phone or email if you need further assistance with your Magento 2 Google reCAPTCHA implementation. We are ready to help!

Our Services

We offer exceptional support for Google invisible reCAPTCHA Magento 2 extension installation. Our specialists will take care of every detail in order to make sure that everything goes without a hitch. We can even help you by setting up more security features and enhancing your website with customization services. To know more, visit our Magento 2 Extensions Store - https://store.webiators.com/

Original Source https://bityl.co/ALEQ
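Whatever extension handles the storefront side, validation always finishes with a server-to-server call to Google's siteverify endpoint. Here is a minimal sketch of that round trip in Python (Magento's own modules do the equivalent in PHP); the secret key is a placeholder you would get from the reCAPTCHA admin console, and the form field name in the usage comment is the standard one on typical integrations.

```python
import requests

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"
SECRET_KEY = "your-secret-key"  # placeholder: issued in the reCAPTCHA admin console

def verify_recaptcha(token: str, remote_ip: str = "") -> bool:
    """Return True if Google confirms the submitted token is legitimate."""
    payload = {"secret": SECRET_KEY, "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip  # optional parameter per Google's API
    resp = requests.post(VERIFY_URL, data=payload, timeout=10)
    resp.raise_for_status()
    result = resp.json()
    # reCAPTCHA v3 responses also carry a 0.0-1.0 "score" you would threshold.
    return bool(result.get("success"))

# Typical use inside a form handler; standard widgets post the token
# as "g-recaptcha-response":
# if not verify_recaptcha(request.form["g-recaptcha-response"]):
#     reject_submission()
```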
0 notes