#examples of application software
Read this comprehensive guide to building a Minimum Viable Product (MVP) and turning your next big idea into a successful, customer-centric product.
#what is mvp#build mvp#mvp software development#examples of mvp#Minimum Viable Product#Small Business#MVP Cost#Software Development#Software Application Development#Mobile App Development
In India, many development companies design sports-related apps. Sports application development apps offer features such as real-time scoring and statistics, and they provide information about games and sporting clubs. If you invest in a sports application now, your business can grow in the future. Helpfulinsightsolution provides the best information in every blog.

#Sports application development course#Sports application development tools#Sports application development software#Sports application development examples#sports app ideas#sports game development
Have you heard of Backend for Frontend (BFF)? Read our blog to know about its transformative potential in web development, where it elevates user experiences, tackles challenges, and optimizes your web development journey.
#Softwaredevelopmentservices#Ascendion#NitorInfotech#software development#web apps development#web application development#web application#example of an api#backend end#company development software#software company#software businesses
ERP Business Software

Enterprise Resource Planning (ERP) software is a comprehensive business management solution that integrates various functional areas of an organization into a centralized system. It provides a suite of applications and tools to streamline business processes, improve efficiency, and enhance decision-making.
Implementing an ERP system requires careful planning, stakeholder involvement, and training. It is crucial to select an ERP software that aligns with the specific needs of the business and can be tailored to suit its requirements. By leveraging ERP software, businesses can streamline processes, improve efficiency, enhance decision-making, and gain a competitive advantage in the market. To know more, browse https://lsi-scheduling.com/
#erp business software#erp business software solutions#erp enterprise software#erp business solutions#erp system business process#erp software system requirements#best enterprise erp software#erp business systems#erp business applications#erp enterprise applications#erp benefits for business#erp in business process#what is erp in business#example of erp in business#erp system software#enterprise planning system#enterprise planning software#erp production scheduling
It's crazy how often a website or application (or whatever other example of software) reaches a point where everyone likes it, it works well, and people are happy, and then the company that makes it puts out a UI update for seemingly no reason, and the update is universally hated by every user. This happens continuously to everything, and not once has anyone in the industry acknowledged that this is not a good thing to do.
The bill in question is Senate Bill 20. Sponsored by GOP senator Pete Flores, the bill, in its own words, “creates a new state offense for the possession or promotion of obscene material that appears to depict a child younger than 18 years old, regardless of whether the depiction is of an actual child, cartoon or animation, or an image created using an artificial intelligence application or other computer software.” Furthermore, subparagraph c of the bill states: “An offense under this section is a state jail felony,” directing first-time offenders convicted under this law to a minimum of five years in prison, simply for owning or viewing such material. As many responses have pointed out, many popular manga would be banned in Texas if this bill is signed into law, which at this point certainly seems likely. Goblin Slayer is the example many are pointing to, due to the infamous assault scenes of characters by goblins in the early issues.
We haven't even been winning the culture war for a year and conservatives are already speedrunning ways to lose it again. I shouldn't be surprised by these kinds of self-inflicted Ls, but I am, and they still piss me off.
Censorship is evil and wrong. And this is pure censorship. Expanding the definition of child porn to include fake images of children that don't exist does absolutely nothing to protect real children. It just gives the anti-porn puritans something to ban. Because that's what censorious puritans do. They ban everything they don't like, because fuck freedom, personal choice, and the 1st Amendment, right?
"It's okay when we do it" isn't just the motto of the Democrats. It's an increasingly loud rallying cry from people on the right who, not even a year ago, were rightfully pushing back against the left censoring media that didn't fit their moral standards. And it's what's going to lead us back into our nice, echoey caves once the culture war turns against us and we retreat like we always do.
Humans are not perfectly vigilant

I'm on tour with my new, nationally bestselling novel The Bezzle! Catch me in BOSTON with Randall "XKCD" Munroe (Apr 11), then PROVIDENCE (Apr 12), and beyond!
Here's a fun AI story: a security researcher noticed that large companies' AI-authored source-code repeatedly referenced a nonexistent library (an AI "hallucination"), so he created a (defanged) malicious library with that name and uploaded it, and thousands of developers automatically downloaded and incorporated it as they compiled the code:
https://www.theregister.com/2024/03/28/ai_bots_hallucinate_software_packages/
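The attack is easy to picture in code. As a purely hypothetical sketch (the allowlist and the fake package name below are invented for illustration; real supply-chain defenses involve much more), a build step could at least flag dependency names that aren't on a vetted list before anything gets installed:

```python
# Hypothetical sketch: flag dependencies that aren't on a team-vetted
# allowlist before installing them. All names here are invented.
KNOWN_GOOD = {"requests", "numpy", "flask"}

def vet_dependencies(requirements):
    """Split requirement names into (approved, suspicious) lists."""
    approved, suspicious = [], []
    for name in requirements:
        (approved if name.lower() in KNOWN_GOOD else suspicious).append(name)
    return approved, suspicious

# "fake-ai-helper-lib" stands in for an AI-hallucinated package name.
approved, suspicious = vet_dependencies(["requests", "fake-ai-helper-lib"])
```

Anything landing in the `suspicious` list would need a human to look at it before it reaches the build, which is exactly the step the developers in the story skipped.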
These "hallucinations" are a stubbornly persistent feature of large language models, because these models only give the illusion of understanding; in reality, they are just sophisticated forms of autocomplete, drawing on huge databases to make shrewd (but reliably fallible) guesses about which word comes next:
https://dl.acm.org/doi/10.1145/3442188.3445922
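The "sophisticated autocomplete" framing can be made concrete with a toy model. This is a deliberately crude sketch, nothing like a production LLM: a bigram counter that, given a word, guesses the statistically most likely next word from its training text, with no notion of meaning whatsoever:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        following[a][b] += 1
    return following

def guess_next(model, word):
    """Return the most frequent successor; pure frequency, no understanding."""
    options = model.get(word.lower())
    return options.most_common(1)[0][0] if options else None

model = train_bigrams("the cat sat on the mat and the cat slept")
guess_next(model, "the")  # guesses "cat", simply because it occurs most often
```

Scale the counts up by many orders of magnitude and add a lot of statistical machinery, and you have the shape of the problem: the guess is shrewd, but nothing in it knows whether the resulting sentence is true.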
Guessing the next word without understanding the meaning of the resulting sentence makes unsupervised LLMs unsuitable for high-stakes tasks. The whole AI bubble is based on convincing investors that one or more of the following is true:
There are low-stakes, high-value tasks that will recoup the massive costs of AI training and operation;
There are high-stakes, high-value tasks that can be made cheaper by adding an AI to a human operator;
Adding more training data to an AI will make it stop hallucinating, so that it can take over high-stakes, high-value tasks without a "human in the loop."
These are dubious propositions. There's a universe of low-stakes, low-value tasks – political disinformation, spam, fraud, academic cheating, nonconsensual porn, dialog for video-game NPCs – but none of them seem likely to generate enough revenue for AI companies to justify the billions spent on models, nor the trillions in valuation attributed to AI companies:
https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/
The proposition that increasing training data will decrease hallucinations is hotly contested among AI practitioners. I confess that I don't know enough about AI to evaluate opposing sides' claims, but even if you stipulate that adding lots of human-generated training data will make the software a better guesser, there's a serious problem. All those low-value, low-stakes applications are flooding the internet with botshit. After all, the one thing AI is unarguably very good at is producing bullshit at scale. As the web becomes an anaerobic lagoon for botshit, the quantum of human-generated "content" in any internet core sample is dwindling to homeopathic levels:
https://pluralistic.net/2024/03/14/inhuman-centipede/#enshittibottification
This means that adding another order of magnitude more training data to AI won't just add massive computational expense – the data will be many orders of magnitude more expensive to acquire, even without factoring in the additional liability arising from new legal theories about scraping:
https://pluralistic.net/2023/09/17/how-to-think-about-scraping/
That leaves us with "humans in the loop" – the idea that an AI's business model is selling software to businesses that will pair it with human operators who will closely scrutinize the code's guesses. There's a version of this that sounds plausible – the one in which the human operator is in charge, and the AI acts as an eternally vigilant "sanity check" on the human's activities.
For example, my car has a system that notices when I activate my blinker while there's another car in my blind-spot. I'm pretty consistent about checking my blind spot, but I'm also a fallible human and there've been a couple times where the alert saved me from making a potentially dangerous maneuver. As disciplined as I am, I'm also sometimes forgetful about turning off lights, or waking up in time for work, or remembering someone's phone number (or birthday). I like having an automated system that does the robotically perfect trick of never forgetting something important.
There's a name for this in automation circles: a "centaur." I'm the human head, and I've fused with a powerful robot body that supports me, doing things that humans are innately bad at.
That's the good kind of automation, and we all benefit from it. But it only takes a small twist to turn this good automation into a nightmare. I'm speaking here of the reverse-centaur: automation in which the computer is in charge, bossing a human around so it can get its job done. Think of Amazon warehouse workers, who wear haptic bracelets and are continuously observed by AI cameras as autonomous shelves shuttle in front of them and demand that they pick and pack items at a pace that destroys their bodies and drives them mad:
https://pluralistic.net/2022/04/17/revenge-of-the-chickenized-reverse-centaurs/
Automation centaurs are great: they relieve humans of drudgework and let them focus on the creative and satisfying parts of their jobs. That's how AI-assisted coding is pitched: rather than looking up tricky syntax and other tedious programming tasks, an AI "co-pilot" is billed as freeing up its human "pilot" to focus on the creative puzzle-solving that makes coding so satisfying.
But a hallucinating AI is a terrible co-pilot. It's just good enough to get the job done much of the time, but it also sneakily inserts booby-traps that are statistically guaranteed to look as plausible as the good code (that's what a next-word-guessing program does: it guesses the statistically most likely word).
This turns AI-"assisted" coders into reverse centaurs. The AI can churn out code at superhuman speed, and you, the human in the loop, must maintain perfect vigilance and attention as you review that code, spotting the cleverly disguised hooks for malicious code that the AI can't be prevented from inserting into its code. As "Lena" writes, "code review [is] difficult relative to writing new code":
https://twitter.com/qntm/status/1773779967521780169
Why is that? "Passively reading someone else's code just doesn't engage my brain in the same way. It's harder to do properly":
https://twitter.com/qntm/status/1773780355708764665
There's a name for this phenomenon: "automation blindness." Humans are just not equipped for eternal vigilance. We get good at spotting patterns that occur frequently – so good that we miss the anomalies. That's why TSA agents are so good at spotting harmless shampoo bottles on X-rays, even as they miss nearly every gun and bomb that a red team smuggles through their checkpoints:
https://pluralistic.net/2023/08/23/automation-blindness/#humans-in-the-loop
"Lena"'s thread points out that this is as true for AI-assisted driving as it is for AI-assisted coding: "self-driving cars replace the experience of driving with the experience of being a driving instructor":
https://twitter.com/qntm/status/1773841546753831283
In other words, they turn you into a reverse-centaur. Whereas my blind-spot double-checking robot allows me to make maneuvers at human speed and points out the things I've missed, a "supervised" self-driving car makes maneuvers at a computer's frantic pace and demands that its human supervisor tirelessly and perfectly assess each of those maneuvers. No wonder Cruise's murderous "self-driving" taxis replaced each low-waged driver with 1.5 high-waged technical robot supervisors:
https://pluralistic.net/2024/01/11/robots-stole-my-jerb/#computer-says-no
AI radiology programs are said to be able to spot cancerous masses that human radiologists miss. A centaur-based AI-assisted radiology program would keep the same number of radiologists in the field, but they would get less done: every time they assessed an X-ray, the AI would give them a second opinion. If the human and the AI disagreed, the human would go back and re-assess the X-ray. We'd get better radiology, at a higher price (the price of the AI software, plus the additional hours the radiologist would work).
But back to making the AI bubble pay off: for AI to pay off, the human in the loop has to reduce the costs of the business buying an AI. No one who invests in an AI company believes that their returns will come from business customers agreeing to increase their costs. The AI can't do your job, but the AI salesman can convince your boss to fire you and replace you with an AI anyway; that pitch is the most successful form of AI disinformation in the world.
An AI that "hallucinates" bad advice to fliers can't replace human customer service reps, but airlines are firing reps and replacing them with chatbots:
https://www.bbc.com/travel/article/20240222-air-canada-chatbot-misinformation-what-travellers-should-know
An AI that "hallucinates" bad legal advice to New Yorkers can't replace city services, but Mayor Adams still tells New Yorkers to get their legal advice from his chatbots:
https://arstechnica.com/ai/2024/03/nycs-government-chatbot-is-lying-about-city-laws-and-regulations/
The only reason bosses want to buy robots is to fire humans and lower their costs. That's why "AI art" is such a pisser. There are plenty of harmless ways to automate art production with software – everything from a "healing brush" in Photoshop to deepfake tools that let a video-editor alter the eye-lines of all the extras in a scene to shift the focus. A graphic novelist who models a room in The Sims and then moves the camera around to get traceable geometry for different angles is a centaur – they are genuinely offloading some finicky drudgework onto a robot that is perfectly attentive and vigilant.
But the pitch from "AI art" companies is "fire your graphic artists and replace them with botshit." They're pitching a world where the robots get to do all the creative stuff (badly) and humans have to work at robotic pace, with robotic vigilance, in order to catch the mistakes that the robots make at superhuman speed.
Reverse centaurism is brutal. That's not news: Charlie Chaplin documented the problems of reverse centaurs nearly 100 years ago:
https://en.wikipedia.org/wiki/Modern_Times_(film)
As ever, the problem with a gadget isn't what it does: it's who it does it for and who it does it to. There are plenty of benefits from being a centaur – lots of ways that automation can help workers. But the only path to AI profitability lies in reverse centaurs, automation that turns the human in the loop into the crumple-zone for a robot:
https://estsjournal.org/index.php/ests/article/view/260
If you'd like an essay-formatted version of this post to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
https://pluralistic.net/2024/04/01/human-in-the-loop/#monkey-in-the-middle
Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg
CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en
--
Jorge Royan (modified) https://commons.wikimedia.org/wiki/File:Munich_-_Two_boys_playing_in_a_park_-_7328.jpg
CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0/deed.en
--
Noah Wulf (modified) https://commons.m.wikimedia.org/wiki/File:Thunderbirds_at_Attention_Next_to_Thunderbird_1_-_Aviation_Nation_2019.jpg
CC BY-SA 4.0 https://creativecommons.org/licenses/by-sa/4.0/deed.en
#pluralistic#ai#supervised ai#humans in the loop#coding assistance#ai art#fully automated luxury communism#labor
Hello! First, I wanted to say thank you for your post about updating software and such. I really appreciated your perspective as someone with ADHD. The way you described your experiences with software frustration was IDENTICAL to my experience, so your post made a lot of sense to me.
Second, (and I hope my question isn't bothering you lol) would you mind explaining why it's important to update/adopt the new software? Like, why isn't there an option that doesn't involve constantly adopting new things? I understand why they'd need to fix stuff like functional bugs/make it compatible with new tech, but is it really necessary to change the user side of things as well?
Sorry if those are stupid questions or they're A Lot for a tumblr rando to ask, I'd just really like to understand because I think it would make it easier to get myself to adopt new stuff if I understand why it's necessary, and the other folks I know that know about computers don't really seem to understand the experience.
Thank you so much again for sharing your wisdom!!
A huge part of it is changing technologies and changing norms; I brought up Windows 8 in that other post and Win8 is a *great* example of user experience changing to match hardware, just in a situation that was an enormous mismatch with the market.
Win8's much-beloathed tiles came about because Microsoft seemed to be anticipating a massive pivot to tablet PCs in nearly all applications. The welcome screen was designed to be friendly to people using handheld touchscreens, who could tap through various options, and it was meant to require more scrolling and less use of a keyboard.
But most people who the operating system went out to *didn't* have touchscreen tablets or laptops, they had a desktop computer with a mouse and a keyboard.
When that was released, it was Microsoft attempting to keep up with (or anticipate) market trends - they wanted something that was like "the iPad for Microsoft" so Windows 8 was meant to go with Microsoft Surface tablets.
We spent the first month of Win8's launch making it look like Windows 7 for our customers.
You can see the same thing with the centered taskbar on Windows 11; that's very clearly supposed to mimic the dock on apple computers (only you can't pin it anywhere but the bottom of the screen, which sucks).
Some of the visual changes are just trends and various companies trying to keep up with one another.
With software like Adobe, I think it's probably based on customer data. The tool layout and the menu dropdowns are likely based on what people are actually looking for, and change based on what other tools people are using. That's likely true for most programs you use - the menu bar at the top of the screen in Word is populated with the options that people use the most; if a function you used to click on all the time is now buried, there's a possibility that people use it less these days for any number of reasons. (I'm currently being driven mildly insane by Teams moving the "attach file" button under a "more" menu instead of keeping it as an icon next to the "send message" button, and what this tells me is either that more users are putting emojis in their messages than attachments, or that Microsoft WANTS people to put more emojis than attachments in their messages).
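A hypothetical sketch of that kind of telemetry-driven reshuffling (the item names and click counts here are invented): rank the menu items by aggregate clicks, keep the top few in the toolbar, and demote everything else to a "More" menu:

```python
def layout_menu(click_counts, visible_slots=3):
    """Put the most-clicked items in the toolbar; bury the rest under 'More'."""
    ranked = sorted(click_counts, key=click_counts.get, reverse=True)
    return ranked[:visible_slots], ranked[visible_slots:]

# Invented aggregate click data across all users, not any one person's habits.
clicks = {"send": 900, "emoji": 450, "gif": 300, "attach_file": 120}
toolbar, more_menu = layout_menu(clicks)
# attach_file ends up under "More" purely because of aggregate usage,
# no matter how often *you* personally attach files.
```

That last comment is the whole frustration in miniature: a layout optimized for the average user is guaranteed to feel wrong to plenty of individual users.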
But focusing on the operating system, since that's the big one:
The thing about OSs is that you interact with them so frequently that any little change seems massive and you get REALLY frustrated when you have to deal with that, but version-to-version most OSs don't change all that much visually and they also don't get released all that frequently. I've been working with windows machines for twelve years and in that time the only OSs that Microsoft has released were 8, 10, and 11. That's only about one OS every four years, which just is not that many. There was a big visual change in the interface between 7 and 8 (and 8 and 8.1, which is more of a 'panicked backing away' than a full release), but otherwise, realistically, Windows 11 still looks a lot like XP.

The second one is a screenshot of my actual computer. The only change I've made to the display is to pin the taskbar to the left side instead of keeping it centered and to fuck around a bit with the colors in the display customization. I haven't added any plugins or tools to get it to look different.
This is actually a pretty good demonstration of things changing based on user behavior too - XP didn't come with a search field in the task bar or the start menu, but later versions of Windows OSs did, because users had gotten used to searching things more in their phones and browsers, so then they learned to search things on their computers.
There are definitely nefarious reasons that software manufacturers change their interfaces. Microsoft has included ads in home versions of their OS and pushed searches through the Microsoft store since Windows 10, as one example. That's shitty and I think it's worthwhile to find the time to shut that down (and to kill various assistants and background tools and stop a lot of stuff that runs at startup).
But if you didn't have any changes, you wouldn't have any changes. I think it's handy to have a search field in the taskbar. I find "settings" (which is newer than control panel) easier to navigate than "control panel." Some of the stuff that got added over time is *good* from a user perspective - you can see that there's a little stopwatch pinned at the bottom of my screen; that's a tool I use daily that wasn't included in previous versions of the OS. I'm glad it got added, even if I'm kind of bummed that my Windows OS doesn't come with Spider Solitaire anymore.
One thing that's helpful to think about when considering software is that nobody *wants* to make clunky, unusable software. People want their software to run well, with few problems, and they want users to like it so that they don't call corporate and kick up a fuss.
When you see these kinds of changes to the user experience, it often reflects something that *you* may not want, but that is desirable to a *LOT* of other people. The primary example I can think of here is trackpad scrolling direction; at some point it became common for trackpads to scroll in the opposite direction that they used to; now the default direction is the one that feels wrong to me, because I grew up scrolling with a mouse, not a screen. People who grew up scrolling on a screen seem to feel that the new direction is a lot more intuitive, so it's the default. Thankfully, that's a setting that's easy to change, so it's a change that I make every time I come across it, but the change was made for a sensible reason, even if that reason was opaque to me at the time I stumbled across it and continues to irritate me to this day.
I don't know. I don't want to defend Windows all that much here because I fucking hate Microsoft and definitely prefer using Linux when I'm not at work or using programs that I don't have on Linux. But the thing is that you'll see changes with Linux releases as well.
I wouldn't mind finding a tool that made my desktop look 100% like Windows 95; that would be fun. But we'd probably all be really frustrated if there hadn't been any interface changes since MS-DOS (and people have DEFINITELY been complaining about UX changes at least since then).
Like, I talk about this in terms of backward compatibility sometimes. A lot of people are frustrated that their old computers can't run new software well, and that new computers use so many resources. But the flipside of that is that pretty much nobody wants mobile internet to work the way that it did in 2004 or computers to act the way they did in 1984.
Like. People don't think about it much these days but the "windows" of the Windows Operating system represented a massive change to how people interacted with their computers that plenty of people hated and found unintuitive.
(also take some time to think about the little changes that have happened that you've appreciated or maybe didn't even notice. I used to hate the squiggly line under misspelled words but now I see the utility. Predictive text seems like new technology to me but it's really handy for a lot of people. Right clicking is a UX innovation. Sometimes you have to take the centered task bar in exchange for the built-in timer deck; sometimes you have to lose color-coded files in exchange for a right click.)
Writing Advice #?: Don’t write out accents.
The Surface-Level Problem: It’s distracting at best, illegible at worst.
The following passage from Sons and Lovers has never made a whit of sense to me:
“I ham, Walter, my lad,’ ’e says; ‘ta’e which on ’em ter’s a mind.’ An’ so I took one, an’ thanked ’im. I didn’t like ter shake it afore ’is eyes, but ’e says, ‘Tha’d better ma’e sure it’s a good un. An’ so, yer see, I knowed it was.’”
There’s almost certainly a point to that dialogue — plot, character, theme — but I could not figure out what the words were meant to be, and gave up on the book. At a lesser extreme, most of Quincey’s lines from Dracula (“I know I ain’t good enough to regulate the fixin’s of your little shoes”) cause American readers to sputter into laughter, which isn’t ideal for a character who is supposed to be sweet and tragic. Accents-written-out draw attention to mechanical qualities of the text.
Solution #1: Use indicators outside of the quote marks to describe how a character talks. An Atlanta accent can be “drawling” and a London one “clipped”; a Princeton one can sound “stiff” and a Newark one “relaxed.” Do they exaggerate their vowels more (North America) or their consonants more (U.K., north Africa)? Do they sound happy, melodious, frustrated?
The Deeper Problem: It’s ignorant at best, and classist/racist/xenophobic at worst.
You pretty much never see authors writing out their own accents — to the person who has the accent, the words just sound like words. It’s only when the accent is somehow “other” to the author that it gets written out.
And the accents that we consider “other” and “wrong” (even if no one ever uses those words, the decision to deliberately misspell words still conveys it) are pretty much never the ones from wealthy and educated parts of the country. Instead, the accents with misspelled words and awkward inflection are those from other countries, from other social classes, from other ethnicities. If your Maine characters speak normally and your Florida characters have grammatical errors, then you have conveyed what you consider to be correct and normal speech. We know what J.K. Rowling thinks of French-accented English, because it’s dripping off of Fleur Delacour’s every line.
At the bizarre extreme, we see inappropriate application of North U.K. and South U.S.-isms to every uneducated and/or poor character ever to appear in fan fic. When wanting to get across that Steve Rogers is a simple Brooklyn boy, MCU fans have him slip into “mustn’t” and “we is.” When conveying that Robin 2.0 is raised poor in Newark, he uses “ain’t” and “y’all” and “din.” Never mind that Iron Man is from Manhattan, or that Robin 3.0 is raised wealthy in Newark; neither of them ever gets a written-out accent.
Solution #2: A little word choice can go a long way, and a little research can go even further. Listen carefully to the way people talk — on the bus, in a café, on unscripted YouTube — and write down their exact word choice. “We good” literally means the same thing as “no thank you,” but one’s a lot more formal than the other. “Ain’t” is a perfectly good synonym for “am not,” but not everyone will use it.
The Obscure Problem: It’s not even how people talk.
Look at how auto-transcription software messes up speaking styles, and it’s obvious that no one pronounces every spoken sound in every word that comes out of their mouth. Consider how Americans say “you all right?”; 99% of us actually say something like “yait?”, using tone and head tilt to convey meaning. Politicians speak very formally; friends at bars speak very informally.
An example: I’m from Baltimore, Maryland. Unless I’m speaking to an American from Texas, in which case I’m from “Baltmore, Marlind.” Unless I’m speaking to an American from Pennsylvania, in which case I’m from “Balmore, Marlin.” If I’m speaking to a fellow Marylander, I’m of course from “Bamor.” (If I’m speaking to a non-American, I’m of course from “Washington D.C.”) Trying to capture every phoneme of change from moment to moment and setting to setting would be ridiculous; better just to say I inflect more when talking to people from outside my region.
When you write out an accent, you insert yourself, the writer, as an implied listener. You inflict your value judgments and your linguistic ear on the reader, and you take away from the story.
Solution #3: When in doubt, just write the dialogue how you would talk.
#writing#writing advice#accents#fan fiction#classism#language#u.s.-centric af because I've only lived so many places
I typed out these messages in a discord server a moment ago, and then thought "hmm, maybe I should make the same points in a tumblr post, since I've been talking about software-only-singularity predictions on tumblr lately"
But, as an extremely lazy (and somewhat busy) person, I couldn't be bothered to re-express the same ideas in a tumblr-post-like format, so I'm giving you these screenshots instead
(If you're not familiar, "MCP" is "Model Context Protocol," a recently introduced standard for connections between LLMs and applications that want to interact with LLMs. Its official website is here – although be warned, that link leads to the bad docs I complained about in the first message. The much more palatable python SDK docs can be found here.)
EDIT: what I said in the first message about "getting Claude to set things up for you locally" was not really correct, I was conflating this (which fits that description) with this and this (which are real quickstarts with code, although not very good ones, and frustratingly there's no end-to-end example of writing a server and then testing it with a hand-written client or the inspector, as opposed to using with "Claude for Desktop" as the client)
How to Buy a Computer for Cheaper
Buy refurbished. I'm going to show you how, and, in general, how to buy a better computer than the one you currently have. I'm fairly tech-knowledgeable, but not an expert; this is how I've bought my last three computers for personal use and business (graphics). I'm writing this for people who barely know computers. If you have a techie friend or family member, having them help can take a lot of the stress out of buying a new computer.
There are three numbers you want to know from your current computer: hard drive size, RAM, and processor speed (the last is slightly less important, unless you're doing gaming, 3D rendering, or something else like that).
We're going to assume you use Windows, because if you use Apple I can't help, sorry.
First is hard drive. This is how much space you have to put files. This is in bytes. These days all hard drives are in gigabytes or terabytes (1000 gigabytes = 1 terabyte). To get your hard drive size, open Windows Explorer, go to This PC (or My Computer if you have a really old OS).
To get more details, you can right-click on the drive and open Properties. But now you know your hard drive size, 237 GB in this case. (This is rather small, but that's okay for this laptop.) If you're planning on storing a lot of videos or big photos, have a lot of applications, etc., you want a MINIMUM of 500 GB. You can always add external drives as well.
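If you'd rather not click around, Python can report the same number; a quick sketch using only the standard library (it queries whichever drive the current directory lives on):

```python
import shutil

# Query total/used/free space for the drive the current directory is on.
usage = shutil.disk_usage(".")
gb = 1000 ** 3  # drive makers use decimal gigabytes
print(f"Total: {usage.total / gb:.0f} GB, free: {usage.free / gb:.0f} GB")
```

The printed total will roughly match what Windows Explorer shows, give or take the decimal-versus-binary gigabyte difference.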
While you've got this open, right-click on This PC (or My Computer). This'll give you a lot of information that can be useful if you're trying to get tech support.
I've underlined in red the two key things. Processor: it can help to know the whole bit (or at least the Intel i# bit) just so you don't buy one that's a bunch older, but processor models are confusing and beyond me. The absolutely important bit is the speed, in gigahertz (GHz). Bigger is faster. The processor speed is how fast your computer can run. In this case the processor is 2.60 GHz, which is just fine for most things.
The other bit is RAM. This is "random-access memory," aka memory, which is easy to confuse with how much space you have. It's not the same thing: RAM is basically how fast your computer can open stuff. This laptop has 16 GB of RAM. Make sure you note that this is the RAM, because it and the hard drive use the same units.
If you're mostly writing, using spreadsheets, watching streaming video, or doing light graphics work, 16 GB is fine. If you have a lot of things open at a time, or you're gaming or doing 3d modeling or digital art, get at least 32 GB or it's gonna lag a lot.
In general, if you find your current laptop slow, you want a new one with more RAM and a processor that's at least slightly faster. If you're getting a new computer to use new software, look at the system requirements and exceed them.
I'll show you an example of that. Let's say I wanted to start doing digital art on this computer, using ClipStudio Paint. Generally the easiest way to find the requirements is to search for 'program name system' in your search engine of choice. You can click around their website if you want, but just searching is a lot faster.
That gives me this page
(Clip Studio does not have very heavy requirements).
Under Computer Specs it tells you the processor types and your RAM requirements. You're basically going to be good for the processor, no matter what. That 2 GB minimum of memory is, again, the RAM.
Storage space is how much space on your hard drive it needs.
Actually for comparison, let's look at the current Photoshop requirements.
Photoshop wants LOTS of speed and space, greedy bastard that it is. (The Graphics card bit is somewhat beyond my expertise, sorry)
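Either way, checking a program's requirements really just means comparing its minimums against your three numbers. A toy sketch (every value here is made up for illustration, not a real spec sheet):

```python
# Your machine's three numbers vs. a program's minimum requirements.
# All values here are illustrative, not real requirements.
my_specs = {"hard_drive_gb": 237, "ram_gb": 16, "cpu_ghz": 2.6}
required = {"hard_drive_gb": 10, "ram_gb": 2, "cpu_ghz": 1.0}

# You're good only if you meet or exceed every minimum.
meets_all = all(my_specs[key] >= required[key] for key in required)
print("Good to go!" if meets_all else "Time to shop for a better computer.")
```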
But now you have your three numbers: hard drive space, RAM (memory) and processor (CPU). Now we're going to find a computer that's better and cheaper than buying new!
We're going to buy ~refurbished~
A refurbished computer is one that was used and then returned and fixed up to sell again. It may have wear on the keyboard or case, but everything inside (aside from the battery) should be like new. (The battery may hold less charge.) A good dealer will note the condition. And refurbished means any flaws in the hardware will be fixed; they have gone through individual quality control that new products usually don't.
I've bought four computers refurbished and only had one dud (Windows kept crashing during set-up). The dud has been returned and we're waiting for the new one.
You can buy refurbished computers from the manufacturers (Lenovo, Dell, Apple, etc) or from online computer stores (Best Buy and my favorite Newegg). You want to buy from a reputable store because they'll offer warranties and have a good return policy.
I'm going to show you how to find a refurbished computer on Newegg.
You're going to go to Newegg.com, you're gonna go to computer systems in their menu, and you're gonna find refurbished
Then, down the side there's a ton of checkboxes where you can select your specifications. If there's a brand you prefer, select that (I like Lenovos A LOT - they last a long time and have very few problems, in my experience. Yes, this is a recommendation).
Put in your memory (RAM), put in your hard drive, put in your CPU speed (processor), and any other preferences like monitor size or which version of Windows you want (I don't want Windows 11 any time soon). I generally just do RAM and hard drive and manually check the CPU, but that's a personal preference. Then hit apply and it'll filter down.
I'm going to say right now, if you are getting a laptop and you can afford to get an SSD, do it. SSD is a solid-state drive, vs a normal hard drive (HDD, hard disk drive). They're less prone to breaking down and they're faster. But they're also more expensive.
Anyway, we have our filtered list of possible laptops. Now what?
Well, now comes the annoying part. Every model of computer can be different - it can have a better or worse display, it can have a crappy keyboard, or whatever. So you find a computer that looks okay, and you then look for reviews.
Here's our first row of results
Let's take a look at the Lenovo, because I like Lenovos and I loathe Dells (they're... fine...). That Thinkpad T460S is the part to Google (search for 'Lenovo Thinkpad T460s reviews'). Good websites that I trust include PCMag, LaptopMag.com, and Notebookcheck.com (which is VERY techie about displays). But every reviewer will probably be getting one with different specs than the thing you're looking at.
Here are key things that will be the same across all of them: keyboard (is it comfortable, etc), battery life, how good is the trackpad/nub mouse (nub mice are immensely superior to trackpads imho), weight, how many and what kind of ports does it have (for USB, an external monitor, etc). Monitors can vary depending on the specs, so you'll have to compare those. Mostly you're making sure it doesn't completely suck.
Let's go back to Newegg and look at the specs of that Lenovo. Newegg makes it easy, with tabs for whatever the seller wants to say, the specs, reviews, and Q&A (which is usually empty).
This is the start of the specs. This is actually a lesser model than the laptop we were getting the specs for. It's okay. What I don't like is that the seller gives very little other info, for example on condition. Here's a Dell with much better information - condition and warranty info.
One thing you'll want to do on Newegg is check the seller's reviews. Like on eBay or Etsy, you have to use some judgement. If you worry about that, going to the manufacturer's online outlet is a safer bet, but you won't get quite as good deals. They're still pretty damn good, though, as this random computer on Lenovo's outlet shows.
Okay, so I think I've covered everything. I do recommend having a techie friend either help or double check things if you're not especially techie. But this can save you hundreds of dollars or allow you to get a better computer than you were thinking.
992 notes
Text
TOPAZ AI TUTORIAL
i was asked to do a tutorial for Topaz AI (a software that enhances screencaps), so here it is! :)
[tutorial under the cut]
i’m going to gif a 720p YouTube video from 12 years ago as an example. it’s the bottom of the barrel when it comes to image quality, but in the end, you won’t believe it was once so shitty. here’s the gif, without any editing:
THE APPLICATION
Topaz AI is a paid software for image enhancement. you can download it for free, but your images will have watermarks. here's a random link that has nothing to do with this tutorial.
you can use Topaz AI as a Photoshop plugin or use the software separately. i will explain both methods in this tutorial.
USING SEPARATELY
it’s the way i do it because it’s more computer-friendly; the plugin can take a toll on your PC, especially when you’re dealing with a lot of screencaps.
you first take screencaps as you normally would (if you don’t, here’s a tutorial on how to do it). open Topaz AI and select all the images. wait a while for the software to do its thing.
on the left there is your screencap, untouched. on the right is your edited version. if you click the edited screencap and hold, Topaz will show you the original; that way you can compare the versions even better than just looking at them side by side.
Topaz AI will automatically recognize faces, if any, and enhance them. this can be toggled off by disabling the “recovering faces” option in the right panel. it’s always on for me, though. you can tweak this feature by clicking on its name; the same goes for the others.
Topaz AI will also automatically upscale your screencaps if they’re too small (less than 4k). it will upscale them to achieve said 4k (in this gif’s case, the original 1280x720 screencaps became 4621x2599). i suggest that you let the app upscale those images, giving you more gif size flexibility. you can change it to whatever size you want if you want something less heavy to store. don’t worry though, even these “4k screencaps” are very light megabytes-wise, so you won’t need a supercomputer. it might take a while to render all your screencaps, though, if you’re on a lower-end computer. (the folder with the edited screencaps ended up being 1GB, but that’s because it contains 123 screencaps, which is a lot of screencaps for 4k giffing).
two options won’t be automatically selected, Remove Noise and Sharpening; you will need to enable them to use them. i rarely skip Remove Noise, as it’s the best tool to remove pixelization. the Sharpening option depends on the gif; sometimes your gif will end up too over-sharpened (because of Topaz’s sharpening and later your own). that said, i used the Sharpening option on this gif.
next, select all images by clicking the “select all” button. you will notice that one of the screencaps’ thumbnails (in my case, the first one) will have small icons the others don’t have. this is the screencap you enhanced. you will need to click the dots menu, select “apply”, and then click “apply current settings to selected images”. this way, every screencap will have the same settings. if you don’t do this step, you will end up with one edited screencap and the rest will remain untouched!
all things done, click “save X images”. in the next panel, you can select where to save your new screencaps and how you want to name them. i always choose to add a topaz- prefix so i know what files i’m dealing with while giffing.
just a note: if your way of uploading screencaps to Photoshop is through image sequence, you will need to change the names of your new screencaps so PS can perceive that as a sequence (screencap1, screencap2, etc). you can do that by selecting all the screencaps in your folder, then selecting to rename just one of them and the rest will receive numbers at the end, from first to last. you don’t need to rename them one by one.
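if you’d rather script that rename, a few lines of Python will do it. this is just a sketch; the folder name and the topaz- prefix are assumptions from my own setup, adjust for yours:

```python
from pathlib import Path

# Rename every topaz-*.png in a folder to screencap1.png, screencap2.png, ...
# so Photoshop can import them as an image sequence.
# The folder name and "topaz-" prefix are assumptions; adjust for your setup.
folder = Path("edited_screencaps")
for i, file in enumerate(sorted(folder.glob("topaz-*.png")), start=1):
    file.rename(folder / f"screencap{i}{file.suffix}")
```

sorting first keeps the frames in their original order before numbering.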
here’s the first gif again, without any editing:
without Topaz enhancement but with sharpening:
without sharpening, only the Topaz enhancement:
with Topaz enhancement and sharpening:
her skin is so smooth that it is a bit unrealistic. i could have edited that while tweaking the “Recovering Faces” option and/or the “Remove Noise” option, but i prefer to add noise (filter > noise > add noise) when necessary. this way, i don’t risk not enhancing the quality of the screencaps enough.
i added +3 of noise, making the gif look more natural. it’s a subtle difference, but i thought it a necessary one in this case. you can continue to edit your gif as your heart desires.
VOILA! 🥳
AS A PHOTOSHOP PLUGIN
if you have Topaz AI installed on your computer, Photoshop will recognize it. you will find it in filter > Topaz Labs > Topaz AI. while in timeline mode, select the filter. the same Topaz AI window will pop up and you can tweak things the same way you do when you use the software separately. by using the plugin, you don’t need to upload your edited screencaps or use screencaps at all; a video clip (turned into a Smart Layer, that is) will suffice. the downside is that for every little thing you do, Topaz AI will recalculate stuff, so you practically can’t do anything without facing a waiting screen. a solution for that is to edit your gif in shitty quality as you would edit an HD one and, at the very end, enable Topaz AI. or just separately edit the screencaps following the first method.
this is it! it's a very simple software to use. the only downside is that it can take a while to render all screencaps, even with a stronger computer, but nothing too ridiculous.
any questions, feel free to contact me! :)
#*#alielook#usershreyu#userlaro#userchibi#tusernath#usersanshou#userbunneis#userzil#tuserlou#jokerous#usersnat#userdavid#userbuckleys#userbarrow#gif tutorial#completeresources#ps help#resources#*tutorials
260 notes
Text
In India, many development companies design sports-related apps. Sports applications offer functions such as real-time scoring and statistics, and provide information about games and sporting clubs. Investing in a sports application can help your business grow in the future. Helpfulinsightsolution provides the best information in every blog.
Users can have a unique, interactive sports app on their gadgets, as sports applications provide exceptional user engagement. The apps have various functions, such as real-time scoring, statistics, and news about different games and sporting clubs. In addition, users can create their own settings and preferences. Some sports applications also let users log on to social forums, like blogs or Facebook pages, and take part in other interactive elements, like forming fan groups. Users get a chance to stay updated on the latest sports news as well as show their support for the team of their choice in real time.

#Sports application development course#Sports application development tools#Sports application development software#Sports application development examples#sports app ideas#sports game development
0 notes
Text
Ever since OpenAI released ChatGPT at the end of 2022, hackers and security researchers have tried to find holes in large language models (LLMs) to get around their guardrails and trick them into spewing out hate speech, bomb-making instructions, propaganda, and other harmful content. In response, OpenAI and other generative AI developers have refined their system defenses to make it more difficult to carry out these attacks. But as the Chinese AI platform DeepSeek rockets to prominence with its new, cheaper R1 reasoning model, its safety protections appear to be far behind those of its established competitors.
Today, security researchers from Cisco and the University of Pennsylvania are publishing findings showing that, when tested with 50 malicious prompts designed to elicit toxic content, DeepSeek’s model did not detect or block a single one. In other words, the researchers say they were shocked to achieve a “100 percent attack success rate.”
The findings are part of a growing body of evidence that DeepSeek’s safety and security measures may not match those of other tech companies developing LLMs. DeepSeek’s censorship of subjects deemed sensitive by China’s government has also been easily bypassed.
“A hundred percent of the attacks succeeded, which tells you that there’s a trade-off,” DJ Sampath, the VP of product, AI software and platform at Cisco, tells WIRED. “Yes, it might have been cheaper to build something here, but the investment has perhaps not gone into thinking through what types of safety and security things you need to put inside of the model.”
Other researchers have had similar findings. Separate analysis published today by the AI security company Adversa AI and shared with WIRED also suggests that DeepSeek is vulnerable to a wide range of jailbreaking tactics, from simple language tricks to complex AI-generated prompts.
DeepSeek, which has been dealing with an avalanche of attention this week and has not spoken publicly about a range of questions, did not respond to WIRED’s request for comment about its model’s safety setup.
Generative AI models, like any technological system, can contain a host of weaknesses or vulnerabilities that, if exploited or set up poorly, can allow malicious actors to conduct attacks against them. For the current wave of AI systems, indirect prompt injection attacks are considered one of the biggest security flaws. These attacks involve an AI system taking in data from an outside source—perhaps hidden instructions of a website the LLM summarizes—and taking actions based on the information.
Jailbreaks, which are one kind of prompt-injection attack, allow people to get around the safety systems put in place to restrict what an LLM can generate. Tech companies don’t want people creating guides to making explosives or using their AI to create reams of disinformation, for example.
Jailbreaks started out simple, with people essentially crafting clever sentences to tell an LLM to ignore content filters—the most popular of which was called “Do Anything Now” or DAN for short. However, as AI companies have put in place more robust protections, some jailbreaks have become more sophisticated, often being generated using AI or using special and obfuscated characters. While all LLMs are susceptible to jailbreaks, and much of the information could be found through simple online searches, chatbots can still be used maliciously.
“Jailbreaks persist simply because eliminating them entirely is nearly impossible—just like buffer overflow vulnerabilities in software (which have existed for over 40 years) or SQL injection flaws in web applications (which have plagued security teams for more than two decades),” Alex Polyakov, the CEO of security firm Adversa AI, told WIRED in an email.
Cisco’s Sampath argues that as companies use more types of AI in their applications, the risks are amplified. “It starts to become a big deal when you start putting these models into important complex systems and those jailbreaks suddenly result in downstream things that increases liability, increases business risk, increases all kinds of issues for enterprises,” Sampath says.
The Cisco researchers drew their 50 randomly selected prompts to test DeepSeek’s R1 from a well-known library of standardized evaluation prompts known as HarmBench. They tested prompts from six HarmBench categories, including general harm, cybercrime, misinformation, and illegal activities. They probed the model running locally on machines rather than through DeepSeek’s website or app, which send data to China.
Beyond this, the researchers say they have also seen some potentially concerning results from testing R1 with more involved, non-linguistic attacks using things like Cyrillic characters and tailored scripts to attempt to achieve code execution. But for their initial tests, Sampath says, his team wanted to focus on findings that stemmed from a generally recognized benchmark.
Cisco also included comparisons of R1’s performance against HarmBench prompts with the performance of other models. And some, like Meta’s Llama 3.1, faltered almost as severely as DeepSeek’s R1. But Sampath emphasizes that DeepSeek’s R1 is a specific reasoning model, which takes longer to generate answers but pulls upon more complex processes to try to produce better results. Therefore, Sampath argues, the best comparison is with OpenAI’s o1 reasoning model, which fared the best of all models tested. (Meta did not immediately respond to a request for comment).
Polyakov, from Adversa AI, explains that DeepSeek appears to detect and reject some well-known jailbreak attacks, saying that “it seems that these responses are often just copied from OpenAI’s dataset.” However, Polyakov says that in his company’s tests of four different types of jailbreaks—from linguistic ones to code-based tricks—DeepSeek’s restrictions could easily be bypassed.
“Every single method worked flawlessly,” Polyakov says. “What’s even more alarming is that these aren’t novel ‘zero-day’ jailbreaks—many have been publicly known for years,” he says, claiming he saw the model go into more depth with some instructions around psychedelics than he had seen any other model create.
“DeepSeek is just another example of how every model can be broken—it’s just a matter of how much effort you put in. Some attacks might get patched, but the attack surface is infinite,” Polyakov adds. “If you’re not continuously red-teaming your AI, you’re already compromised.”
57 notes
Text
So NFTgate has now hit tumblr - I made a thread about it on my twitter, but I'll talk a bit more about it here as well in slightly more detail. It'll be a long one, sorry! Using my degree for something here. This is not intended to sway you one way or the other - merely to inform so you can make your own decision, and so that you're aware of this, because it will happen again with many other artists you know.
Let's start at the basics: NFT stands for 'non fungible token', which you should read as 'passcode you can't replicate'. These codes are stored in blocks in what is essentially a huge ledger of records, all chained together - a blockchain. Blockchain is encoded in such a way that you can't edit one block without editing the whole chain, meaning that when the data is validated it comes back 'negative' if it has been tampered with. This makes it a really, really safe method of storing data, and managing access to said data. For example, verifying that a bank account belongs to the person who says it's their bank account.
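That "edit one block and the whole chain fails validation" property is easy to see in miniature. Here's a toy sketch in Python (real blockchains do far more than this; it only shows the chaining idea):

```python
import hashlib

def block_hash(data, prev_hash):
    """Each block's hash covers its own data AND the previous block's hash."""
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64  # the first block points at a dummy hash
    for data in records:
        h = block_hash(data, prev)
        chain.append({"data": data, "prev": prev, "hash": h})
        prev = h
    return chain

def is_valid(chain):
    prev = "0" * 64
    for block in chain:
        # Recompute each hash; any edit to the data or the links breaks the match.
        if block["prev"] != prev or block["hash"] != block_hash(block["data"], prev):
            return False
        prev = block["hash"]
    return True

ledger = build_chain(["alice pays bob 5", "bob pays carol 2"])
print(is_valid(ledger))                    # True: untampered
ledger[0]["data"] = "alice pays bob 500"   # edit one block...
print(is_valid(ledger))                    # False: validation comes back 'negative'
```

Because every hash depends on the previous one, faking a single block means recomputing every block after it, which is what makes tampering detectable.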
For most people, the association with NFTs is bitcoin and Bored Ape, and that's honestly fair. The way that used to work - and why it was such a scam - is that you essentially purchased a receipt that said you owned digital space - not the digital space itself. That receipt was the NFT. So, in reality, you did not own any goods, that receipt had no legal grounds, and its value was completely made up and not based on anything. On top of that, these NFTs were purchased almost exclusively with cryptocurrency which at the time used a verification method called proof of work, which is terrible for the environment because it requires insane amounts of electricity and computing power to verify. The carbon footprint for NFTs and coins at this time was absolutely insane.
In short, Bored Apes were just a huge tech fad with the intention to make a huge profit regardless of the cost, which resulted in the large market crash late last year. NFTs in this form are without value.
However, NFTs are just tech by itself more than they are some company that uses them. NFTs do have real-life, useful applications, particularly in data storage and verification. Research is being done to see if we can use blockchain to safely store patient data, or use it for bank wire transfers of extremely large amounts. That's cool stuff!
So what exactly is Käärijä doing? Kä is not selling NFTs in the traditional way you might have become familiar with. In this use-case, the NFT is in essence a software key that gives you access to a digital space. For the raffle, the NFT was basically your ticket number. This is a very secure way of doing it, assuring both uniqueness and that no one can replicate that code and win through a false method. You are paying for a legitimate product - the NFT is your access to that product.
What about the environmental impact in this case? We've thankfully made leaps and bounds in advancing the tech to reduce the carbon footprint as well as general mitigations to avoid expanding it over time. One big thing is shifting from proof of work verification to proof of space or proof of stake verifications, both of which require much less power in order to work. It seems that Kollekt is partnered with Polygon, a company that offers blockchain technology with the intention to become climate positive as soon as possible. Numbers on their site are very promising, they appear to be using proof of stake verification, and all-around appear more interested in the tech than the profits it could offer.
But most importantly: Kollekt does not allow for purchases made with cryptocurrency, and that is the real pisser from an environmental perspective. Cryptocurrency purchases require the most active verification across systems in order to go through - this is what bitcoin mining is, essentially. The fact that this website does not use it means good things in terms of carbon footprint.
But why not use something like Patreon? I can't tell you. My guess is that Patreon is a monthly recurring service and they wanted something one-time. Kollekt is based in Helsinki, and word is that Mikke (who is running this) is friends with folks on the team. These are all contributing factors, I would assume, but that's entirely an assumption and you can't take it as fact.
Is this a good thing/bad thing? That I also can't tell you - you have to decide that for yourself. It's not a scam, it's not crypto, just a service that sits on the blockchain. But it does have higher carbon output than a lot of other services do, and its exact nature is not publicly disclosed. This isn't intended to sway you to say one or the other, but merely to give you the proper understanding of what NFTs are as a whole and what they are in this particular case so you can make that decision for yourself.
95 notes