#AI messaging for LinkedIn
Text
The #1 AI Hack to Get More LinkedIn Leads
If you’re still manually sending LinkedIn connection requests one by one… we need to talk. LinkedIn is a goldmine for high-quality leads, partnerships, and sales opportunities—but let’s be honest, the process of finding, messaging, and following up with potential clients is exhausting. That’s where AI steps in. What if LinkedIn could generate leads…
#AI business development #AI business networking #AI engagement for LinkedIn #AI follow-up automation #AI for B2B sales #AI for client acquisition #AI for LinkedIn connections #AI for LinkedIn sales #AI LinkedIn automation #AI LinkedIn follow-ups #AI LinkedIn pipeline #AI messaging for LinkedIn #AI outreach conversion optimization #AI prospecting tools #AI sales automation #AI sales prospecting #AI social selling #AI-driven lead generation #AI-driven LinkedIn marketing #AI-driven LinkedIn strategy #AI-powered lead nurturing #AI-powered LinkedIn outreach #AI-powered LinkedIn personalization #AI-powered networking #AI-powered sales outreach #automate LinkedIn messages #automated LinkedIn prospecting #LinkedIn AI engagement tracking #LinkedIn AI scrapers #LinkedIn AI tools
0 notes
There was that scene in Kim's Convenience where the daughter is in her photography class and her lecturer is looking up her website: the first result isn't hers, the second one also isn't hers, they have to go to page 2 of Google, and at that point the lecturer just throws up their hands and says that as a potential customer, they've already lost interest and given up, because it's not worth the effort
and lately I just feel like the whole fucking internet feels like that
I want to look up how to use Cricut stuff and what it even really is, what I can do with what, but when I put their name in I get taken to the fucking shop with no explanations far and wide, then the next link is also the shop, the next link is ALSO the shop but different, and by the time I finally find a page that has any kind of explanation, I'm so annoyed by the hoops it makes me jump through THEN—e.g. selecting which topic I want to learn more about—that I'm no longer interested in doing this shit
the other day I wanted to look up what Nokia is up to in terms of phones these days but they no longer have 1 coherent website. In general, many places seem to not want any coherence in their websites, or sub-menus that you can easily navigate
Like, I come from Myspace. I know how to navigate the internet. I played WoW in the days of dial-up internet. And yet, everything is so goddamn convoluted and incoherent, there is NO structure or logic to anything, and on top of that, Google, and with it most other search engines, are fucking fried! A few years ago, if a website was really badly designed, you could just navigate back, google the website + search term you needed and get there somehow, but now that is also useless more often than not!
At this point I am genuinely over the internet. We had a good 15 years with it, let's pack it up.
#technology #rant #google #internet #I would PREFER myspace days #on wordpress many things are no longer possible #that I want to do #but they are like 'users can only click and drag and we do not trust them with that power' #between this and AI I am sick of it #genuinely i don't think these people understand how much they are not worth my time #just so I can give them money #paypal too!!!! #someone sent me a msg on there didn't know that was possible but cool #so I checked my messages but haha no that's where paypal msgs go #as in letters from paypal #not messages ON paypal #so I check my payments since it was in response to that but ALSO no #at that point I already wanted to shake whoever made that fucked up choice #cause already I'm tired of it #it should be either in my messages or on my dashboard with the payment it belongs to #y'all don't know what you are doing get outttt #I should do two things: #a) start a 'learn everything' discord server with anyone who wants in #we will teach each other everything we want #fuck khan academy I'm doing this now #b) go on linkedin and promote myself as consultant for 1000 things #app design/user interface consultant #communication consultant too #writing consultant! Social media consultant! #if it makes me want to yell at you I will yell at you
3 notes
Leverage the power of AI-personalized messaging to take your LinkedIn and email campaigns to new heights! Quicklead's advanced tool automates the creation of highly tailored messages that resonate with each prospect, increasing engagement and response rates.
0 notes
Rather than use the revenue from Duolingo’s nearly 10 million paid subscribers to hire more experts to create–I cannot stress this enough–learning content that requires subject matter expertise, [CEO] von Ahn seems to believe that quantity is the top goal, and that its users deserve a flood of what even he admits could be shit. This certainly squares with my own experience of Duolingo, which employs every cheap gamification trick in the book to simply keep me in the app and push me to pay.

On LinkedIn, Duolingo’s chief engineering officer Natalie Glance shared a Slack message she wrote to the engineering team explaining how AI will fit into their work. This message features new directives such as “AI will be everywhere in our product” and “productivity expectations will rise.” Here’s my personal favorite:

Start with AI for every task. No matter how small, try using an AI tool first. It won’t always be faster or better at first–but that’s how you build skill. Don’t give up if the first result is wrong.

Again, these people are literally admitting AI won’t work, but are still asking engineers now laboring under heightened productivity expectations to spend their even-more-precious time managing this sub-par tech. "That's how you build skill" puts the AI tools' failing back on the user, the "have fun staying poor" of the AI world, and asks engineers to troubleshoot AI when they could instead be using their real live human skill to do the jobs they were hired for.
29 April 2025
162 notes
A pro-Israel “surveillance network” that has offered bounties for information on pro-Palestinian protesters is establishing a foothold in Australia and claims to have secured meetings with key federal politicians, leaked messages show.
Shirion Collective, which has largely focused on the US and UK, boasts of its ability to scrape digital fingerprints to “aggressively track and expose antisemites”. It is one of a number of groups that have gained prominence on social media during the Israel-Gaza war, publicly naming individuals it accuses of being antisemitic.
Shirion Collective claims it has an AI tool called Maccabee which can identify and track targets.
In one post on X, Shirion outlines a scenario in which the tool creates and releases deepfake videos – falsified content that looks and sounds genuine – to embarrass individuals who take down posters picturing Israeli hostages.
On its X account, Shirion Collective has claimed to offer bounties of US$500 for information on people in videos. In a December post it claimed it would pay up to US$15,000 for “crucial insights” about politicians, US$7,500 for medical doctors and US$250 for students.
Leaked screenshots of Shirion’s Telegram channel, shared with Guardian Australia by the White Rose Society, an anti-fascist research group, show Shirion has become active in Australia, with participants identifying potential targets and boasting of attempts to meet the home affairs minister, Clare O’Neil, and the shadow home affairs minister, James Paterson.
Anonymised Shirion members discussed presenting O’Neil and Paterson with a list of names to ensure they were “brought to justice according to the rule of law”.
“Need help. We managed to get into home affairs calendar, need to come prepared with people with hate speech and names that the government didn’t held [sic] accountable,” one anonymous user said.
“Meeting with Clair [sic] or her stuff [sic] … we also have a meeting with the shadow minister.”
Both O’Neil and Paterson’s offices said they had not met anyone who identified themselves as part of Shirion Collective.
The leaked texts show people on the Shirion channel discussed adding the names of individuals to a “watch list” and mass reporting posts on social media.
Some Australians whose social media accounts were linked in the channel had shared antisemitic, racist and conspiracy theory content on social media. Others were pro-Palestinian activists who do not appear to have posted or shared antisemitic content.
When contacted via its social media accounts, a Shirion member describing themself as the “social media guy” said the “Ai is a quiet project with an internal team”.
The Shirion member said “bounties were for info and was in the USA not Australia”. The member said Shirion’s Telegram channel was open.
“The telegram [sic] is open and we do a soft verification that people are real. But freedom of speech is welcome there,” the Shirion member said.
The member said they would refer Guardian Australia’s questions to a “commander” but no further response was received.
Shirion Collective is one of several groups that say they track and fight antisemitism, largely through identifying individuals online.
Canary Mission, which has been operating since at least 2015, maintains lists of students, professors and other individuals on its website who it claims “promote hatred of the USA, Israel and Jews”. Another prominent account on X, StopAntisemitism, shares the names and employers or academic institutions of individuals, and often directs its more than 298,000 followers where to make complaints.
The leaked posts from the Shirion Collective Telegram channel point to some publicly available material its contributors regard as antisemitic, but also discuss creating “infiltrator” accounts to view and share material from private Instagram accounts.
In the leaked posts seen by Guardian Australia, contributors do not reveal personally identifiable information about any individual that is not publicly available.
The Shirion Collective account on X/Twitter has identified people it alleges have posted antisemitic material, or statements in support of Hamas, and tagged in their employer or academic institution in the case of students.
Naming someone online is not necessarily illegal, but Michael Bradley, a managing partner at Marque Lawyers, warned there were potential implications depending on the nature of the claims, such as harassment and intimidation or even racial vilification.
“Using social media as a mechanism for coalescing groups that want to engage in doxing activity, it’s obviously extremely powerful,” he said.
Last month, a Sydney resident named Theo had a picture of his house and his street address posted to a Facebook group.
Theo, who asked that his surname not be used, had raised a Palestinian flag and placed a blackboard with messages critical of Israel in front of his Botany home.
Less than two weeks later, a jerry can with rags stuffed into it, a disposable lighter and large bolts were placed on the bonnet of his car with a message that read: “Enough! Take down flag! One chance!!!!”
The incident prompted the deployment of the bomb squad and local police.
The investigation has not been transferred to the counter-terror investigators and remains with local police.
also

@huzni @el-shab-hussein @dirhwangdaseul
303 notes
“I don’t want to connect my coffee machine to the Wifi network. I don’t want to share the file with OneDrive. I don’t want to download an app to check my car’s fluid levels. I don’t want to scan a QR code to view the restaurant menu. I don’t want to let Google know my location before showing me the search results. I don’t want to include a Teams link on the calendar invite. I don’t want to pay 50 different monthly subscription fees for all my software. I don’t want to upgrade to TurboTax platinum plus audit protection. I don’t want to install the Webex plugin to join the meeting. I don’t want to share my car’s braking data with the actuaries at State Farm. I don’t want to text with your AI chatbot. I don’t want to download the Instagram app to look at your picture. I don’t want to type in my email address to view the content on your company’s website. I don’t want text messages with promo codes. I don’t want to leave your company a five-star Google review in exchange for the chance to win a $20 Starbucks gift card. I don’t want to join your exclusive community in the metaverse. I don’t want AI to help me write my comments on LinkedIn. I don’t even want to be on LinkedIn in the first place. I just want to pay for a product one time (and only one time), know that it’s going to work flawlessly, press 0 to speak to an operator if I need help, and otherwise be left alone and treated with some small measure of human dignity, if that’s not too much to ask anymore.” ~ Robert Sterling

48 notes
These days, when Nicole Yelland receives a meeting request from someone she doesn’t already know, she conducts a multi-step background check before deciding whether to accept. Yelland, who works in public relations for a Detroit-based non-profit, says she’ll run the person’s information through Spokeo, a personal data aggregator that she pays a monthly subscription fee to use. If the contact claims to speak Spanish, Yelland says, she will casually test their ability to understand and translate trickier phrases. If something doesn’t quite seem right, she’ll ask the person to join a Microsoft Teams call—with their camera on.
If Yelland sounds paranoid, that’s because she is. In January, before she started her current non-profit role, Yelland says she got roped into an elaborate scam targeting job seekers. “Now, I do the whole verification rigamarole any time someone reaches out to me,” she tells WIRED.
Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds.
On LinkedIn, it can be hard to distinguish a slightly touched-up headshot of a real person from a too-polished, AI-generated facsimile. Deepfake videos are getting so good that longtime email scammers are pivoting to impersonating people on live video calls. According to the US Federal Trade Commission, reports of job and employment related scams nearly tripled from 2020 to 2024, and actual losses from those scams have increased from $90 million to $500 million.
Yelland says the scammers that approached her back in January were impersonating a real company, one with a legitimate product. The “hiring manager” she corresponded with over email also seemed legit, even sharing a slide deck outlining the responsibilities of the role they were advertising. But during the first video interview, Yelland says, the scammers refused to turn their cameras on during a Microsoft Teams meeting and made unusual requests for detailed personal information, including her driver’s license number. Realizing she’d been duped, Yelland slammed her laptop shut.
These kinds of schemes have become so widespread that AI startups have emerged promising to detect other AI-enabled deepfakes, including GetReal Labs, and Reality Defender. OpenAI CEO Sam Altman also runs an identity-verification startup called Tools for Humanity, which makes eye-scanning devices that capture a person’s biometric data, create a unique identifier for their identity, and store that information on the blockchain. The whole idea behind it is proving “personhood,” or that someone is a real human. (Lots of people working on blockchain technology say that blockchain is the solution for identity verification.)
But some corporate professionals are turning instead to old-fashioned social engineering techniques to verify every fishy-seeming interaction they have. Welcome to the Age of Paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a timestamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off.
“What’s funny is, the low-fi approach works,” says Daniel Goldman, a blockchain software engineer and former startup founder. Goldman says he began changing his own behavior after he heard a prominent figure in the crypto world had been convincingly deepfaked on a video call. “It put the fear of god in me,” he says. Afterwards, he warned his family and friends that even if they hear what they believe is his voice or see him on a video call asking for something concrete—like money or an internet password—they should hang up and email him first before doing anything.
Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their resume, such as their favorite coffee shops and places to hang out. If the applicant is actually based in that geographic region, Schumacher says, they should be able to respond quickly with accurate details.
Another verification tactic some people use, Schumacher says, is what he calls the “phone camera trick.” If someone suspects the person they’re talking to over video chat is being deceitful, they can ask them to hold up their phone camera to their laptop. The idea is to verify whether the individual may be running deepfake technology on their computer, obscuring their true identity or surroundings. But it’s safe to say this approach can also be off-putting: Honest job candidates may be hesitant to show off the inside of their homes or offices, or worry a hiring manager is trying to learn details about their personal lives.
“Everyone is on edge and wary of each other now,” Schumacher says.
While turning yourself into a human captcha may be a fairly effective approach to operational security, even the most paranoid admit these checks create an atmosphere of distrust before two parties have even had the chance to really connect. They can also be a huge time suck. “I feel like something’s gotta give,” Yelland says. “I’m wasting so much time at work just trying to figure out if people are real.”
Jessica Eise, an assistant professor studying climate change and social behavior at Indiana University-Bloomington, says that her research team has been forced to essentially become digital forensics experts, due to the amount of fraudsters who respond to ads for paid virtual surveys. (Scammers aren’t as interested in the unpaid surveys, unsurprisingly.) If the research project is federally funded, all of the online participants have to be over the age of 18 and living in the US.
“My team would check time stamps for when participants answered emails, and if the timing was suspicious, we could guess they might be in a different time zone,” Eise says. “Then we’d look for other clues we came to recognize, like certain formats of email address or incoherent demographic data.”
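The kinds of checks Eise describes can be approximated in a few lines. A minimal sketch in Python, purely illustrative: the threshold hours, the email pattern, and all function names here are assumptions for demonstration, not the research team's actual criteria.

```python
from datetime import datetime, timedelta, timezone
import re

# Hypothetical screening heuristics, loosely inspired by the checks
# described above; thresholds and patterns are illustrative assumptions.

# Flag formulaic addresses like "name817263@gmail.com" (letters + long digit run).
SUSPECT_EMAIL = re.compile(r"^[a-z]+\d{4,}@(gmail|outlook)\.com$")

def suspicious_reply_hour(reply_utc: datetime, claimed_utc_offset: int) -> bool:
    """Flag replies landing in the middle of the night (02:00-05:00)
    in the respondent's *claimed* time zone."""
    local = reply_utc + timedelta(hours=claimed_utc_offset)
    return 2 <= local.hour < 5

def screen(email: str, reply_utc: datetime, claimed_utc_offset: int) -> list[str]:
    """Return a list of red flags for one survey respondent."""
    flags = []
    if SUSPECT_EMAIL.match(email.lower()):
        flags.append("formulaic email address")
    if suspicious_reply_hour(reply_utc, claimed_utc_offset):
        flags.append("reply sent at an implausible local hour")
    return flags

# 08:30 UTC is 03:30 in a claimed UTC-5 time zone, so both heuristics fire.
print(screen("jane817263@gmail.com",
             datetime(2025, 3, 1, 8, 30, tzinfo=timezone.utc), -5))
# → ['formulaic email address', 'reply sent at an implausible local hour']
```

Heuristics like these only narrow the pool; as the quote below notes, manual review remains unavoidable.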
Eise says the amount of time her team spent screening people was “exorbitant,” and that they’ve now shrunk the size of the cohort for each study and have turned to “snowball sampling,” or recruiting people they know personally to join their studies. The researchers are also handing out more physical flyers to solicit participants in person. “We care a lot about making sure that our data has integrity, that we’re studying who we say we’re trying to study,” she says. “I don’t think there’s an easy solution to this.”
Barring any widespread technical solution, a little common sense can go a long way in spotting bad actors. Yelland shared with me the slide deck that she received as part of the fake job pitch. At first glance, it seemed like a legit pitch, but when she looked at it again, a few details stood out. The job promised to pay substantially more than the average salary for a similar role in her location, and offered unlimited vacation time, generous paid parental leave, and fully-covered health care benefits. In today’s job environment, that might have been the biggest tipoff of all that it was a scam.
27 notes
LinkedIn sued for allegedly training AI models with private messages without consent
Source: https://therecord.media/linkedin-lawsuit-private-messages-ai-training
More info: https://www.documentcloud.org/documents/25502518-gov/
15 notes
The robots have come for your em dashes
Did you know em dashes are a sign of AI writing? Since when? Well, that’s what all the LinkedIn influencers say! Humans don’t use them in writing, right? They use hyphens.

Of course we use em dashes—they’re basic punctuation marks! We use em dashes as a pause for emphasis—like I did in the first sentence in this paragraph—or to set off a quasi-parenthetical phrase, like I just did in this sentence. A hyphen is used to form compound words, or to break words by syllable at the end of lines. They’re not the same. Nobody better tell the influencers about en dashes; they’ll spontaneously combust.

So where’d this idea come from? You got me, but I’d guess it stems from the nature of AI-generated text. It always shows a certain level of polish: no spelling or grammatical mistakes, complete sentences, a clear organization and structure. LLMs get this right in their “first drafts” because the majority of text they were trained on is itself polished—and it turns out that this level of polish covers the correct use of em dashes.

Is there a better way to detect AI-generated text than looking for em dashes, then, smart guy? All generative AI is, by literal definition, statistically median, and this isn’t something that can be overcome by creative prompt engineering. Flair and personality in human writing comes from individualistic, quirky choices born of a writer’s likes, dislikes, and life experiences. LLMs don’t have any of those. So LLM-generated text tends to read like pleasantly anodyne business memos, or book reports by a precocious ninth-grader. So if the text you’re playing “robot or not” with is a business memo, a high school paper, or a blog post by the kind of LinkedIn influencer who thinks em dashes are a sign of AI writing, it might be a challenge. In other circumstances, it shouldn’t be. But it apparently is!
The problem—and, dear reader, there is no gentle way to put this—is that a lot of ostensibly smart people neither write particularly well, nor read enough to get a sense of what good writing is like. Most of the writing they deal with—memos, emails, text messages, Amazon and Yelp reviews—is slapdash, unpolished. So if they come across anything in those contexts that looks refined yet relatively generic, they suspect ChatGPT. Apparently, we’ve so come to expect a lack of writing style from humans—and I mean style in the sense of Strunk & White, not in the sense of actual style—that we’re suspicious of it when we see it.

If you don’t want your writing to be mistaken for AI, what should you do? Use “fuck” more often.

To support my writing, consider a tip on Ko-fi.com. (originally published at Coyote Tracks)
7 notes
One obvious (to me) omission is the mention of how all of this affects disabled folks. You have people already heavily discriminated against for being different now being expected to jump through additional hoops to prove their humanity.
Digital imposter scams aren’t new; messaging platforms, social media sites, and dating apps have long been rife with fakery. In a time when remote work and distributed teams have become commonplace, professional communications channels are no longer safe, either. The same artificial intelligence tools that tech companies promise will boost worker productivity are also making it easier for criminals and fraudsters to construct fake personas in seconds. ... But some corporate professionals are turning instead to old-fashioned social engineering techniques to verify every fishy-seeming interaction they have. Welcome to the Age of Paranoia, when someone might ask you to send them an email while you’re mid-conversation on the phone, slide into your Instagram DMs to ensure the LinkedIn message you sent was really from you, or request you text a selfie with a timestamp, proving you are who you claim to be. Some colleagues say they even share code words with each other, so they have a way to ensure they’re not being misled if an encounter feels off. ... Ken Schumacher, founder of the recruitment verification service Ropes, says he’s worked with hiring managers who ask job candidates rapid-fire questions about the city where they claim to live on their resume, such as their favorite coffee shops and places to hang out. If the applicant is actually based in that geographic region, Schumacher says, they should be able to respond quickly with accurate details.
3 notes
I don't want to write "content" anymore. They're just waiting for AI to do it all. Some are using AI already — it's so obvious on LinkedIn when somebody's cut and pasted some dumb message.
Also the pay is shrinking. I stopped writing freelance journalism because the pay went from $500 to $200 to $75. It's happening in "content" now too.
I want some corporate job that pays well and I'll write my little stories on my lunch breaks. I just don't know what I am qualified for after being a writer for so long.
8 notes
The single most demoralizing experience I’ve had as an adult living in the 21st Century is applying for a job, not getting it, and then seeing incessant ads for the company on social media apps. I’m trying to waste a little time watching comedy reels or looking at photos of my friends’ kids on Instagram, and now I’m getting reminded that a place didn’t like me enough to hire me but would still love my business. It’s like getting ghosted by a woman after a few great dates, then seeing her in a fast food commercial months later when you’re minding your business, watching football. That’s also happened to me. It bugged me way less than a hiring manager never calling back.
I recently applied to be a social media manager for a wizard-themed restaurant in New York City aimed at — I guess — the rare millennial who feels too classy for Dave & Busters (adult Chuck E. Cheese) but still wants to experience a modicum of childlike wonder by eating in a restaurant that vaguely reminds them of Harry Potter (the Chuck E. Cheese of books). I didn’t get the job — nay — even an interview, but now every other image on my Instagram feed is a bubbling cauldron or a waiter/wizard throwing flames from a wand in front of hot people ignoring a plate of shrimp.
I’ve been out of the dating game for a long while, but from what I’ve learned from my single friends, I imagine the emotions felt while navigating rejection translate neatly into feelings I’ve had during a job search. In both instances, apps shoulder most of the blame. I’ve never used Tinder unless you count the rare but exhilarating moments when my friends allowed me to write their profiles or message a person on their behalf to see how weird I could make the conversation before the potential partner unmatched them. I have used ZipRecruiter though.
ZipRecruiter is a job-hunting website and app like LinkedIn except there’s a real chance you can get a job while using it. You upload your résumé and save a few searches, and soon you’re getting notifications for businesses for which you’d be a “Fair,” “Good,” or “Great” match. You can then tap a single button to apply to most jobs after skimming the basic requirements and bullet points asking for a candidate “comfortable working in a fast-paced environment.” (They all say that or “detail-oriented.” It’s like someone in a 90s movie saying they want someone with a “good sense of humor.” Or a dude on Tinder now posting a profile photo holding a fish.) Like videos I’ve seen of men swiping right on every single person to see if anyone was interested, I spend most of my time on ZipRecruiter tapping “apply” while barely looking to see if I am even a “fair” match for the job.
ZipRecruiter also tells you how many times a hiring manager opens your application. When it’s more than three times, you receive an excited email from Phil, ZipRecruiter’s milquetoast AI career advisor with a cartoon face of a bespectacled man who looks annoyingly employed, letting you know that most applicants get an interview after a mere two looks, but three? It’s all but guaranteed. Start picking out your wardrobe for the interview. And then… nothing. Fuck you, Phil. With your perfect hair and your expendable income that you spend on polo shirts. How dare you lead me on like that?
Read the rest on Substack
31 notes
Understanding Autonomous SDR Platforms: The Future of Sales Automation
The B2B sales world is changing fast thanks to autonomous SDR platforms—agentic AI systems that automate the entire sales development process. For startups and growing companies, especially founders, CROs, RevOps, and Demand-Gen leaders, this tech is a game-changer in 2025.
What’s an Autonomous SDR Platform?
Think of it as a super-smart AI Sales Development Rep that:
Processes huge data sets (from CRMs, websites, public info)
Makes decisions and plans outreach on its own
Crafts personalized multi-channel messages (email, LinkedIn, and more)
Learns and adapts with every interaction
Unlike basic automation, it works 24/7 independently, freeing your team to focus on closing deals.
Book a Demo: https://www.jeeva.ai/request-demo
How Does It Work?
Smart Prospecting: Finds high-potential leads using AI and buying signals
Data Enrichment: Builds rich profiles for truly personalized outreach
Multi-Channel Engagement: Runs emails, LinkedIn, and other touchpoints automatically
Lead Qualification: Talks to prospects, asks questions, and nurtures leads
Meeting Scheduling: Books qualified meetings directly into your calendar
Continuous Learning: Optimizes every step using real-time data
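A loop like the one those six bullets describe can be sketched as a plain data pipeline. The sketch below is purely illustrative: the class, the scoring logic, and every name in it are hypothetical stand-ins, not the actual API of Jeeva or any other SDR product.

```python
from dataclasses import dataclass, field

# Illustrative stand-in logic for each stage of the SDR loop described
# above; none of these names come from a real product API.

@dataclass
class Lead:
    name: str
    company: str
    score: float = 0.0
    profile: dict = field(default_factory=dict)
    qualified: bool = False

def prospect(raw_contacts: list[dict]) -> list[Lead]:
    """Smart prospecting: keep only contacts showing a buying signal."""
    return [Lead(c["name"], c["company"])
            for c in raw_contacts if c.get("buying_signal")]

def enrich(lead: Lead) -> Lead:
    """Data enrichment: attach profile details used for personalization."""
    lead.profile = {"channel": "linkedin",
                    "opener": f"Saw {lead.company} is growing"}
    return lead

def qualify(lead: Lead, replied: bool) -> Lead:
    """Lead qualification: naive scoring based on engagement."""
    lead.score = 0.9 if replied else 0.2
    lead.qualified = lead.score > 0.5
    return lead

def run_pipeline(raw_contacts: list[dict], replies: dict) -> list[Lead]:
    """Prospect -> enrich -> qualify; scheduling would follow for
    qualified leads."""
    leads = [enrich(l) for l in prospect(raw_contacts)]
    return [qualify(l, replies.get(l.name, False)) for l in leads]

booked = run_pipeline(
    [{"name": "Ana", "company": "Acme", "buying_signal": True},
     {"name": "Bo", "company": "Globex"}],  # no signal: filtered out
    replies={"Ana": True},
)
print([(l.name, l.qualified) for l in booked])  # → [('Ana', True)]
```

A real platform replaces each hand-written rule here with a learned model and live CRM data, but the stage boundaries are the same, which is why CRM integration and data quality (see the challenges below) matter so much.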
Why It Matters
Founders: Scale pipeline fast without adding costly headcount
CROs: Get steady, 24/7 lead flow and better conversion rates
RevOps: Automate tedious tasks and get clear data insights
Demand-Gen: Deliver hyper-personalized outreach that really connects
Challenges to Consider
Data quality is key
Integration with your CRM must be seamless
Keep the human touch for handoffs
Avoid spamming with poor personalization
Ensure ethical AI use and bias mitigation
The Future Is Here
Agentic AI SDRs like Jeeva AI are transforming sales development—handling lead gen, enrichment, multi-channel outreach, and booking meetings at less than 25% of human SDR cost. Ready to scale smarter?
Contact Jeeva AI 2708 Wilshire Blvd, Santa Monica, CA 90403 Phone: +1 424-645-7525 Email: [email protected]
2 notes
Sonia’s Trying to Hold Herself Accountable to Just Write SOMETHING Once A Day challenge - Day 15
I have to remember that WRITING SOMETHING every day doesn't have to necessarily be creative writing. I am born of LiveJOURNAL after all :) So today's entry is just a journal entry cuz I'm feeling quite frantic and am hoping this might help me calm the hell down.
I got laid off 6 months ago, I'm almost positive TO THE DAY which was wild to realize. Reduction in force. Never had that happen. It was 5 weeks shy of my 8 year work anniversary. To say I was shooketh would STILL be an understatement and it has been a devastating 6 months ever since.
Because I was employed for at least the last 11 years. On top of that, my last application and interview would have been in September/October of 2016. To say that the job market is different would be a WILD UNDERSTATEMENT. I have been on LinkedIn, Indeed, Dice, and so many other sites, sending in my resume for the last 6 months and other than a handful of rejections and a BUTTLOAD of scam emails AND text messages (why are these things now???) offering me jobs I never applied for and I'm sure don't exist ... I have heard NOTHING back from anyone.
And it's not like I've been applying for jobs I'm wildly unqualified for. I've applied for Data Entry and admin jobs. And yet, I either don't hear anything at all (85% of the time) or I get generic probably AI generated rejections (15% of the time). And to tell y'all that my confidence and self esteem torpedoed into Hell? Again. Understatement. I'm on social media telling strangers who express the same feelings, "PLEASE REMEMBER YOUR WORTH IS NOT DETERMINED BY THIS PROCESS!!" Yet I couldn't internalize that for myself.
Then suddenly, last week, I had 3 different ACTUAL HUMAN BEINGS reach out to me about jobs. First on Indeed.com, someone local reached out to me about a local in office job. Second, I got an email invite from someone from a major company saying they wanted to talk to me on a 20 minute call about a job. Then as I was sitting at my laptop one day, I got a phone call from an area code I recognized as Pennsylvania cuz my first boyfriend was from that state lol so I picked it up ... and it was someone who'd seen my resume on one of the sites I put it up on (I didn't think that ever happened) asking about another potentially local job but unfortunately the commute would be 2 hours a day and just not work for me in that respect.
I can't figure out why I suddenly got a flurry of activity all at once? My brain, wanting to put it into black and white so I could try to make sense of it, wanted to say it MUST have something to do with maybe hiring freezes til the end of Q1? And I have no clue if that might be it or not. I just hope that the trend continues and I get more responses from actual real humans (not scammers)? Please.
I also had NO IDEA just how badly I was in the dumps until I got the first of these reach outs, which was a man from a local company. We had a phone interview first and we talked for about 40 minutes and even though I'm SURE I said too much and wrong things, he had extended an invite for me to go into the office this week. Unfortunately with the pay rate and potential hours he had to offer, it wouldn't make financial sense for me right now which sucks cuz the duties he described sounded RIGHT up my alley, the office only has 5 people in it, he would have no problem with me wearing my mask, not customer facing and it's only about 15 minutes from my residence. The money just isn't there. And I KNOW that a job is a job and in this united states we live in right now, I should probably just settle. But the pay is so far below what I was making at my last position and not even guaranteed full time that ... it just doesn't make sense.
But today I had my 2nd phone interview with the big company that extended me an invite. It was a Teams call so I thought I'd have to be on camera. I went and got an emergency haircut and my hair re-purpled on Tuesday lol to be sure I didn't look like Hell only for her to tell me my camera didn't need to be on AND us having to switch to phone cuz I kept cutting out. Again, though, she liked what I had to say enough to pass me onto the NEXT interview (I've read stories of people having to go through FIVE ROUNDS of interviews for positions only to get ghosted?!?!), but the next interview is AN HOUR LONG.
.......what on Earth would I talk to anyone about IN A WORK ENVIRONMENT for 60 full minutes? I reached out to someone I trust (the only good manager I had at my last position and he wasn't even ever officially my manager) to ask for advice and he said be myself and answer honestly and if they ask me those questions that go "Tell me about a time..." and I don't have an actual example? LIE because they are trying to get to a certain quality so I need to figure out what that quality is and answer to show that I have it.
But guys, I'm SUCH a nervous Nellie and I just don't think I should or would EVER "be myself" around strangers for 60 minutes lol strangers don't deserve that but also? I'm a lot. I'm too much. I'm a mess and I just ... I don't know what to do lol
I've accepted the invite and it's set for 12:30pm CST next Wednesday but OMG I am shaking in my boots. Cuz I just ... for 6 months I've been made to feel unqualified and unworthy. Then suddenly, some people FINALLY saw something in words about me that prompted them to reach out to me cuz they think I have SOMETHING that might work out. And that's been a balm to my soul.
The possible new position would be brand new work for me, which I'm up for cuz not only do I think I need a challenge at this stage in my life ... it's an opportunity to REALLY HELP people with important stuff. And my heart and soul have been SCREAMING to please try and ideally find work that means something to a greater collective. I didn't think I'd ever find that but it's possible this position is. And it's fully remote so Halleloo. It's just that once again, the pay isn't quite there. Buuuuuuuuuuuuut, is the pay ever going to be there? I'm guessing no and with the state of things where I live ... while the ideal is NOT to settle, I'm afraid I just should. People I talked to today have said, "If you find out you don't like it, it's not permanent, you don't have to stick with it if it turns out to not be for you." Which is funny because that never occurred to me because I'm the type of person that once I make a commitment, even if I'm miserable? I don't think about leaving cuz I committed. And how foul would it be to sign onto something, make the company think they found someone, then a few days in be like SORRY PEACE! and leave?
Sigh. Idk. If anyone has any advice or anecdotes they want to offer, please feel free. I'm open to advice, words of wisdom, or just well wishes lol
If by some miracle you read through all of this:
Social Media Algorithms: What Marketers Need to Know!
Introduction
Social media algorithms are the invisible forces that determine what content users see on their feeds. These algorithms analyze user behavior, engagement, and preferences to curate personalized content. For marketers, understanding these algorithms is crucial to maximizing reach, engagement, and conversions. This article delves into how major social media platforms use algorithms and how marketers can optimize their strategies accordingly.
Understanding Social Media Algorithms
Social media algorithms use machine learning and artificial intelligence to prioritize content based on relevance, rather than chronology. They consider various factors such as engagement, user interactions, and content type to tailor feeds.
Key Factors Influencing Social Media Algorithms
Engagement Metrics – High interaction rates increase content visibility.
Relevance – Algorithms analyze user preferences to suggest content.
Recency – Newer posts often get priority.
User Activity – Frequency and type of content users engage with affect what they see.
Content Type – Video, images, and interactive content tend to perform better.
Paid Promotions – Ads and sponsored content get preferential treatment.
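These factors are typically folded into a single ranking score per post, and the feed is sorted by that score. The toy Python sketch below shows the idea; the weights, decay rate, and field names are invented for illustration, since real platform ranking models are far more complex and proprietary:

```python
import math

def rank_score(post, now):
    """Toy relevance score combining engagement, recency, affinity, and
    content type, as described above. All weights are invented."""
    age_hours = (now - post["created_at"]) / 3600
    recency = math.exp(-age_hours / 24)              # newer posts decay less
    engagement = math.log1p(post["likes"] + 2 * post["comments"] + 3 * post["shares"])
    affinity = post["affinity"]                      # 0..1: user's history with this author
    type_boost = {"video": 1.3, "image": 1.1, "text": 1.0}[post["type"]]
    return engagement * affinity * type_boost * recency

now = 1_700_000_000  # fixed timestamp so the example is deterministic
fresh_video = {"created_at": now - 3600, "likes": 10, "comments": 5,
               "shares": 2, "affinity": 0.8, "type": "video"}
old_text = {"created_at": now - 48 * 3600, "likes": 100, "comments": 50,
            "shares": 20, "affinity": 0.8, "type": "text"}
feed = sorted([old_text, fresh_video], key=lambda p: rank_score(p, now), reverse=True)
print(feed[0]["type"])  # the fresh video outranks the older, higher-engagement text post
```

Note how recency decay lets a one-hour-old video beat a two-day-old post with ten times the engagement, which is exactly the relevance-over-chronology behavior described above.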
Platform-Specific Algorithm Insights
1. Facebook Algorithm
Prioritizes content from family, friends, and groups over brand pages.
Uses machine learning to rank posts based on interactions.
Factors include meaningful engagement and video watch time.
Marketing Tips: Use interactive posts, live videos, and encourage discussions.
2. Instagram Algorithm
Ranks posts based on interest, relationship, timeliness, and usage patterns.
Gives priority to Reels and Stories due to high engagement.
Shopping features impact content visibility for e-commerce brands.
Marketing Tips: Utilize hashtags, post consistently, and engage with users through Stories and interactive stickers.
3. X (formerly Twitter) Algorithm
Mixes real-time tweets with ranked content.
Prioritizes tweets with high engagement within the first few minutes.
Factors include recency, engagement, and media type.
Marketing Tips: Tweet frequently, use trending hashtags, and engage with followers.
4. LinkedIn Algorithm
Prioritizes professional content with high engagement and comments.
Encourages long-form content, videos, and industry discussions.
Marketing Tips: Post valuable insights, engage with other posts, and use LinkedIn polls and articles.
5. TikTok Algorithm
Relies on user interaction, video information, and device settings.
The “For You” page is personalized for each user.
Marketing Tips: Create engaging, short-form content, participate in trends, and use viral sounds.
6. YouTube Algorithm
Recommends videos based on watch history, engagement, and subscriptions.
Encourages longer watch times.
Marketing Tips: Optimize video titles, descriptions, and use compelling thumbnails.
Strategies to Work with Social Media Algorithms
1. Focus on High-Quality Content
Create engaging, valuable, and shareable content.
Use storytelling to captivate audiences.
2. Leverage Video Content
Short-form videos perform exceptionally well.
Live streaming boosts engagement rates.
3. Engage with Your Audience
Respond to comments and messages promptly.
Encourage discussions and user-generated content.
4. Optimize Posting Times
Identify peak engagement hours for each platform.
Post consistently to maintain visibility.
5. Utilize Hashtags and Keywords
Use relevant hashtags to expand reach.
Incorporate SEO-friendly keywords in captions and descriptions.
6. Invest in Paid Advertising
Run targeted ad campaigns to reach specific demographics.
Use A/B testing to optimize ad performance.
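When A/B testing ad variants, the usual check for whether a difference in conversion rate is real (rather than noise) is a two-proportion z-test. A minimal sketch, with made-up sample numbers:

```python
from math import sqrt

def ab_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)              # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))     # standard error of the difference
    return (p_b - p_a) / se

# Hypothetical campaign: variant A converts 120/4000, variant B 165/4000.
z = ab_z(120, 4000, 165, 4000)
print(round(z, 2))  # 2.71 — |z| > 1.96 means significant at the 5% level
```

In practice a stats library (e.g. statsmodels' `proportions_ztest`) does this for you, but the arithmetic above is all that is happening under the hood.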
7. Collaborate with Influencers
Partner with influencers to amplify reach and credibility.
Micro-influencers often provide better engagement than celebrities.
The Future of Social Media Algorithms
As AI technology evolves, social media algorithms will become more personalized and predictive. Future trends include:
Increased AI and Machine Learning – Enhanced content recommendations.
Greater Focus on Privacy – More transparency in algorithm operations.
Rise of Decentralized Social Platforms – Users may gain more control over their feeds.
Conclusion
Understanding and adapting to social media algorithms is key to digital marketing success. By leveraging high-quality content, engagement strategies, and data-driven insights, marketers can navigate algorithm changes effectively. Staying informed and flexible will ensure sustained growth and visibility in an ever-changing digital landscape.
On paper, the first candidate looked perfect. Thomas was from rural Tennessee and had studied computer science at the University of Missouri. His résumé said he’d been a professional programmer for eight years, and he’d breezed through a preliminary coding test. All of this was excellent news for Thomas’ prospective boss, Simon Wijckmans, founder of the web security startup C.Side. The 27-year-old Belgian was based in London but was looking for ambitious, fully remote coders.
Thomas had an Anglo-Saxon surname, so Wijckmans was surprised when he clicked into his Google Meet and found himself speaking with a heavily accented young man of Asian origin. Thomas had set a generic image of an office as his background. His internet connection was laggy—odd for a professional coder—and his end of the call was noisy. To Wijckmans, Thomas sounded like he was sitting in a large, crowded space, maybe a dorm or a call center.
Wijckmans fired off his interview questions, and Thomas’ responses were solid enough. But Wijckmans noticed that Thomas seemed most interested in asking about his salary. He didn’t come across as curious about the actual work or about how the company operated or even about benefits like startup stock or health coverage. Odd, thought Wijckmans. The conversation came to a close, and he got ready for the next interview in his queue.
Once again, the applicant said they were based in the US, had an Anglo name, and appeared to be a young Asian man with a thick, non-American accent. He used a basic virtual background, was on a terrible internet connection, and had a single-minded focus on salary. This candidate, though, was wearing glasses. In the lenses, Wijckmans spotted the reflection of multiple screens, and he could make out a white chatbox with messages scrolling by. “He was clearly either chatting with somebody or on some AI tool,” Wijckmans remembers.
On high alert, Wijckmans grabbed screenshots and took notes. After the call ended, he went back over the job applications. He found that his company’s listings were being flooded with applicants just like these: an opening for a full-stack developer got more than 500 applications in a day, far more than usual. And when he looked more deeply into the applicants’ coding tests, he saw that many candidates appeared to have used a virtual private network, or VPN, which allows you to mask your computer’s true location.
Wijckmans didn’t know it yet, but he’d stumbled onto the edges of an audacious, global cybercrime operation. He’d unwittingly made contact with an army of seemingly unassuming IT workers, deployed to work remotely for American and European companies under false identities, all to bankroll the government of North Korea.
With a little help from some friends on the ground, of course.
Christina Chapman was living in a trailer in Brook Park, Minnesota, a hamlet north of Minneapolis, when she got a note from a recruiter that changed her life. A bubbly 44-year-old with curly red hair and glasses, she loved her dogs and her mom and posting social justice content on TikTok. In her spare time she listened to K-pop, enjoyed Renaissance fairs, and got into cosplay. Chapman was also, according to her sparse online résumé, learning to code online.
It was March 2020 when she clicked on the message in her LinkedIn account. A foreign company was looking for somebody to “be the US face” of the business. The company needed help finding remote employment for overseas workers. Chapman signed on. It’s unclear how fast her workload grew, but by October 2022 she could afford a move from chilly Minnesota to a low-slung, four-bedroom house in Litchfield Park, Arizona. It wasn’t fancy—a suburban corner lot with a few thin trees—but it was a big upgrade over the trailer.
Chapman then started documenting more of her life on TikTok and YouTube, mostly talking about her diet, fitness, or mental health. In one chatty video, shared in June 2023, she described grabbing breakfast on the go—an açaí bowl and a smoothie—because work was so busy. “My clients are going crazy!” she complained. In the background, the camera caught a glimpse of metal racks holding at least a dozen open laptops covered in sticky notes. A few months later, federal investigators raided Chapman’s home, seized the laptops, and eventually filed charges alleging that she had spent three years aiding the “illicit revenue generation efforts” of the government of North Korea.
For maybe a decade, North Korean intelligence services have been training young IT workers and sending them abroad in teams, often to China or Russia. From these bases, they scour the web for job listings all over, usually in software engineering, and usually with Western companies. They favor roles that are fully remote, with solid wages, good access to data and systems, and few responsibilities. Over time they began applying for these jobs using stolen or fake identities and relying on members of their criminal teams to provide fictional references; some have even started using AI to pass coding tests, video interviews, and background checks.
But if an applicant lands a job offer, the syndicate needs somebody on the ground in the country the applicant claims to live in. A fake employee, after all, can’t use the addresses or bank accounts linked to their stolen IDs, and they can’t dial in to a company’s networks from overseas without instantly triggering suspicion. That’s where someone like Christina Chapman comes in.
As the “facilitator” for hundreds of North Korea–linked jobs, Chapman signed fraudulent documents and handled some of the fake workers’ salaries. She would often receive their paychecks in one of her bank accounts, take a cut, and wire the rest overseas: Federal prosecutors say Chapman was promised as much as 30 percent of the money that passed through her hands.
Her most important job, though, was tending the “laptop farm.” After being hired, a fake worker will typically ask for their company computer to be sent to a different address than the one on record—usually with some tale about a last-minute move or needing to stay with a sick relative. The new address, of course, belongs to the facilitator, in this case Chapman. Sometimes the facilitator forwards the laptop to an address overseas, but more commonly that person holds onto it and installs software that allows it to be controlled remotely. Then the fake employee can connect to their machine from anywhere in the world while appearing to be in the US. (“You know how to install Anydesk?” one North Korean operative asked Chapman in 2022. “I do it practically EVERYDAY!” she replied.)
In messages with her handlers, Chapman discussed sending government forms like the I-9, which attests that a person is legally able to work in the US. (“I did my best to copy your signature,” she wrote. “Haha. Thank you,” came the response.) She also did basic tech troubleshooting and dialed into meetings on a worker’s behalf, sometimes on short notice, as in this conversation from November 2023:
Worker: We are going to have laptop setup meeting in 20 mins. Can you join Teams meeting and follow what IT guy say? Because it will require to restart laptop multiple times and I can not handle that. You can mute and just follow what they say ...
Chapman: Who do I say I am?
Worker: You don’t have to say, I will be joining there too.
Chapman: I just typed in the name Daniel. If they ask WHY you are using two devices, just say the microphone on your laptop doesn’t work right ... Most IT people are fine with that explanation.
Sometimes, she got jumpy. “I hope you guys can find other people to do your physical I9s,” she wrote to her bosses in 2023, according to court documents. “I will SEND them for you, but have someone else do the paperwork. I can go to FEDERAL PRISON for falsifying federal documents.” Michael Barnhart, an investigator at cybersecurity company DTEX and a leading expert on the North Korean IT worker threat, says Chapman’s involvement followed a standard pattern—from an innocuous initial contact on LinkedIn to escalating requests. “Little by little, the asks get bigger and bigger,” he says. “Then by the end of the day, you’re asking the facilitator to go to a government facility to pick up an actual government ID.”
By the time investigators raided Chapman’s home, she was housing several dozen laptops, each with a sticky note indicating the fake worker’s identity and employer. Some of the North Korean operatives worked multiple jobs; some had been toiling quietly for years. Prosecutors said at least 300 employers had been pulled into this single scheme, including “a top-five national television network and media company, a premier Silicon Valley technology company, an aerospace and defense manufacturer, an iconic American car manufacturer, a high-end retail store, and one of the most recognizable media and entertainment companies in the world.” Chapman, they alleged, had helped pass along at least $17 million. She pleaded guilty in February 2025 to charges relating to wire fraud, identity theft, and money laundering and is awaiting sentencing.
Chapman’s case is just one of several North Korean fake-worker prosecutions making their way through US courts. A Ukrainian named Oleksandr Didenko has been accused of setting up a freelancing website to connect fake IT workers with stolen identities. Prosecutors say at least one worker was linked to Chapman’s laptop farm and that Didenko also has ties to operations in San Diego and Virginia. Didenko was arrested in Poland last year and was extradited to the United States. In Tennessee, 38-year-old Matthew Knoot is due to stand trial for his alleged role in a scheme that investigators say sent hundreds of thousands of dollars to accounts linked to North Korea via his laptop farm in Nashville. (Knoot has pleaded not guilty.) And in January 2025, Florida prosecutors filed charges against two American citizens, Erick Ntekereze Prince and Emanuel Ashtor, as well as a Mexican accomplice and two North Koreans. (None of the defendants’ lawyers in these cases responded to requests for comment.) The indictments claim that Prince and Ashtor had spent six years running a string of fake staffing companies that placed North Koreans in at least 64 businesses.
Before the hermit kingdom had its laptop farms, it had a single confirmed internet connection, at least as far as the outside world could tell. As recently as 2010, that one link to the web was reserved for use by high-ranking officials. Then, in 2011, 27-year-old Kim Jong Un succeeded his father as the country’s dictator. Secretly educated in Switzerland and said to be an avid gamer, the younger Kim made IT a national priority. In 2012, he urged some schools to “pay special attention to intensifying their computer education” to create new possibilities for the government and military. Computer science is now on some high school curricula, while college students can take courses on information security, robotics, and engineering.
The most promising students are taught hacking techniques and foreign languages that can make them more effective operatives. Staff from government agencies including the Reconnaissance General Bureau—the nation’s clandestine intelligence service—recruit the highest-scoring graduates of top schools like Kim Chaek University of Technology (described by many as “the MIT of North Korea”) or the prestigious University of Sciences in Pyongsong. They are promised good wages and unfettered access to the internet—the real internet, not the intranet available to well-off North Koreans, which consists of a mere handful of heavily censored North Korean websites.
The earliest cyberattacks launched by Pyongyang were simple affairs: defacing websites with political messages or launching denial-of-service attacks to shut down US websites. They soon grew more audacious. In 2014, North Korean hackers famously stole and leaked confidential information from Sony’s film studio. Then they targeted financial institutions: Fraudulent trades pulled more than $81 million from Bangladesh Bank’s accounts at the New York Federal Reserve. After that, North Korean hackers moved into ransomware—the WannaCry attack in 2017 locked hundreds of thousands of Windows computers in 150 countries and demanded payments in bitcoin. While the amount of revenue the attack generated is up for debate—some say it earned just $140,000 in payouts—it wreaked much wider damage as companies worked to upgrade their systems and security, costing as much as $4 billion, according to one estimate.
Governments responded with more sanctions and stronger security measures, and the regime pivoted, dialing back on ransomware in favor of quieter schemes. It turns out these are also more lucrative: Today, the most valuable tool in North Korea’s cybercrime armory is cryptocurrency theft. In 2022, hackers stole more than $600 million worth of the cryptocurrency ether by attacking the blockchain game Axie Infinity; in February of this year, they robbed the Dubai-based crypto exchange Bybit of $1.5 billion worth of digital currency. The IT pretender scam, meanwhile, seems to have been growing slowly until the pandemic dramatically expanded the number of remote jobs, and Pyongyang saw the perfect opportunity.
In 2024, according to a recent report from South Korea’s National Intelligence Service, the number of people working in North Korea’s cyber divisions—which includes pretenders, crypto thieves, and military hackers—stood at 8,400, up from 6,800 two years earlier. Some of these workers are based in the country, but many are stationed overseas in China, Russia, Pakistan, or elsewhere. They are relatively well compensated, but their posting is hardly cushy.
Teams of 10 to 20 young men live and work out of a single apartment, sleeping four or five to a room and grinding up to 14 hours a day at weird hours to correspond with their remote job’s time zone. They have quotas of illicit earnings they are expected to meet. Their movements are tightly controlled, as are those of their relatives, who are effectively held hostage to prevent defections. “You don’t have any freedom,” says Hyun-Seung Lee, a North Korean defector who lives in Washington, DC, and says some of his old friends were part of such operations. “You’re not allowed to leave the apartment unless you need to purchase something, like grocery shopping, and that is arranged by the team leader. Two or three people must go together so there’s no opportunity for them to explore.”
The US government estimates that a typical team of pretenders can earn up to $3 million each year for Pyongyang. Experts say the money is pumped into everything from Kim Jong Un’s personal slush fund to the country’s nuclear weapons program. A few million dollars may seem small next to the flashy crypto heists—but with so many teams operating in obscurity, the fraud is effective precisely because it is so mundane.
In the summer of 2022, a major multinational company hired a remote engineer to work on website development. “He would dial in to meetings, he would participate in discussions,” an executive at the company told me on condition of anonymity. “His manager said he was considered the most productive member of the team.”
One day, his coworkers organized a surprise to celebrate his birthday. Colleagues gathered on a video call to congratulate him, only to be startled by his response—but it’s not my birthday. After nearly a year at the company, the worker had apparently forgotten the birth date listed in his records. It was enough to spark suspicion, and soon afterward the security team discovered that he was running remote access tools on his work computer, and he was let go. It was only later, when federal investigators discovered one of his pay stubs at Christina Chapman’s laptop farm in Arizona, that the company connected the dots and realized it had employed a foreign agent for nearly a year.
For many pretenders, the goal is simply to earn a good salary to send back to Pyongyang, not so much to steal money or data. “We’ve seen long-tail operations where they were going 10, 12, 18 months working in some of these organizations,” says Adam Meyers, a senior vice president for counter adversary operations at the security company CrowdStrike. Sometimes, though, North Korean operatives last just a few days—enough time to download huge amounts of company data or plant malicious software in a company’s systems before abruptly quitting. That code could alter financial data or manipulate security information. Or these seeds could lie dormant for months, even years.
“The potential risk from even one minute of access to systems is almost unlimited for an individual company,” says Declan Cummings, the head of engineering at software company Cinder. Experts say that attacks are ramping up not just in the US but also in Germany, France, Britain, Japan, and other countries. They urge companies to do rigorous due diligence: speak directly to references, watch for candidates making sudden changes of address, use reputable online screening tools, and conduct a physical interview or in-person ID verification.
But none of these methods are foolproof, and AI tools are constantly weakening them. ChatGPT and the like give almost anyone the capacity to answer esoteric questions in real time with unearned confidence, and their fluency with coding threatens to make programming tests irrelevant. AI video filters and deepfakes can also add to the subterfuge.
At an onboarding call, for instance, many HR representatives now ask new employees to hold their ID up to the camera for closer inspection. “But the fraudsters have a neat trick there,” says Donal Greene, a biometrics expert at the online background check provider Certn. They take a green-colored card the exact shape and size of an identity card—a mini green screen—and, using deepfake technology, project the image of an ID onto it. “They can actually move it and show the reflection,” says Greene. “It’s very sophisticated.” North Korean agents have even been known to send look-alikes to pick up a physical ID card from an office or to take a drug test required by prospective employers.
Even security experts can be fooled. In July 2024, KnowBe4, a Florida-based company that offers security training, discovered that a new hire known as “Kyle” was actually a foreign agent. “He interviewed great,” says Brian Jack, KnowBe4’s chief information security officer. “He was on camera, his résumé was right, his background check cleared, his ID cleared verification. We didn’t have any reason to suspect this wasn’t a valid candidate.” But when his facilitator—the US-based individual giving him cover—tried to install malware on Kyle’s company computer, the security team caught on and shut him out.
Back in London, Simon Wijckmans couldn’t let go of the idea that somebody had tried to fool him. He’d just read about the KnowBe4 case, which deepened his suspicions. He conducted background checks and discovered that some of his candidates were definitely using stolen identities. And, he found, some of them were linked to known North Korean operations. So Wijckmans decided to wage a little counter-exercise of his own, and he invited me to observe.
I dial in to Google Meet at 3 am Pacific time, tired and bleary. We deliberately picked this offensively early hour because it’s 6 am in Miami, where the candidate, “Harry,” claims to be.
Harry joins the call, looking pretty fresh-faced. He’s maybe in his late twenties, with short, straight, black hair. Everything about him seems deliberately nonspecific: He wears a plain black crewneck sweater and speaks into an off-brand headset. “I just woke up early today for this interview, no problem,” he says. “I know that working with UK hours is kind of a requirement, so I can get my working hours to yours, so no problem with it.”
So far, everything matches the hallmarks of a fake worker. Harry’s virtual background is one of the default options provided by Google Meet, and his connection is a touch slow. His English is good but heavily accented, even though he tells us he was born in New York and grew up in Brooklyn. Wijckmans starts with some typical interview questions, and Harry keeps glancing off to his right as he responds. He talks about various coding languages and name-drops the frameworks he’s familiar with. Wijckmans starts asking some deeper technical questions. Harry pauses. He looks confused. “Can I rejoin the meeting?” he asks. “I have a problem with my microphone.” Wijckmans nods, and Harry disappears.
A couple of minutes pass, and I start to fret that we’ve scared him away, but then he pops back into the meeting. His connection isn’t much better, but his answers are clearer. Maybe he restarted his chatbot, or got a coworker to coach him. The call runs a few more minutes and we say goodbye.
Our next applicant calls himself “Nic.” On his résumé he’s got a link to a personal website, but this guy doesn’t look much like the profile photo on the site. This is his second interview with Wijckmans, and we are certain that he’s faking it: He’s one of the applicants who failed the background check after his first call, although he doesn’t know that.
Nic’s English is worse than Harry’s: When he’s asked what time it is, he tells us it’s “six and past” before correcting himself and saying “quarter to seven.” Where does he live? “I’m in Ohio for now,” he beams, like a kid who got something right in a pop quiz.
Several minutes in, though, his answers become nonsensical. Wijckmans asks him a question about web security. “Political leaders ... government officials or the agencies responsible for border security,” Nic says. “They’re responsible for monitoring and also securing the borders, so we can employ the personnel to patrol the borders and also check the documents and enforce the immigration laws.”
I’m swapping messages with Wijckmans on the back channel we’ve set up when it dawns on us: Whatever AI bot Nic seems to be using must have confused a mention of “Border Gateway Protocol”—a system for routing traffic across the internet—with national borders, and started spewing verbiage about immigration enforcement. “What a waste of time,” Wijckmans messages me. We wrap up the conversation abruptly.
I try to put myself in the seat of a hiring manager or screener who’s under pressure. The fraudsters’ words may not have always made sense, but their test scores and résumés looked solid, and their technical-sounding guff might be enough to fool an uninformed recruiter. I suspect at least one of them could have made it to the next step in some unsuspecting company’s hiring process.
Wijckmans tells me he has a plan if he comes across another pretender. He has created a web page that looks like a standard coding assessment, which he’ll send to fake candidates. As soon as they hit the button to start the test, their browser will spawn dozens of pop-up pages that bounce around the screen, all of them featuring information on how to defect from North Korea. Then loud music plays—a rickroll, “The Star-Spangled Banner”—before the computer starts downloading random files and emits an ear-splitting beep. “Just a little payback,” he says.
Wijckmans’ stunt is not going to stop the pretenders, of course. But maybe it will irritate them for a moment. Then they’ll get back to work, signing on from some hacking sweatshop in China or through a laptop farm in the US, and join the next team meeting—a quiet, camera-off chat with coworkers just like me or you.