#structured query language
digitaldetoxworld · 1 month ago
Structured Query Language (SQL): A Comprehensive Guide
Structured Query Language, popularly called SQL (pronounced "ess-cue-ell" or sometimes "sequel"), is the standard language for managing and manipulating relational databases. Developed in the early 1970s by IBM researchers Donald D. Chamberlin and Raymond F. Boyce, SQL has since become the dominant language for database systems around the world.
Structured query language commands with examples
Today, virtually every major relational database management system (RDBMS)—including MySQL, PostgreSQL, Oracle, SQL Server, and SQLite—uses SQL as its core query language.
What is SQL?
SQL is a domain-specific language used to:
Retrieve data from a database.
Insert, update, and delete records.
Create and modify database structures (tables, indexes, views).
Manage access permissions and security.
Perform data analytics and reporting.
In simple terms, SQL lets users communicate with databases to store and retrieve structured data.
Key Characteristics of SQL
Declarative Language: SQL focuses on what to do, not how to do it. For instance, when you write SELECT * FROM users, you don't need to tell SQL how to fetch the data—it figures that out.
Standardized: SQL has been standardized by organizations such as ANSI and ISO, with most database systems implementing the core language and adding their own extensions.
Relational Model-Based: SQL is designed to work with tables (also called relations) in which data is organized in rows and columns.
Core Components of SQL
SQL can be broken down into several major categories of commands, each with a specific purpose.
1. Data Definition Language (DDL)
DDL commands are used to define or modify the structure of database objects such as tables, schemas, and indexes.
Common DDL commands:
CREATE: To create a new table or database.
ALTER: To modify an existing table (add or remove columns).
DROP: To delete a table or database.
TRUNCATE: To delete all rows from a table but preserve its structure.
Example:
CREATE TABLE employees (
  id INT PRIMARY KEY,
  name VARCHAR(100),
  salary DECIMAL(10,2)
);
2. Data Manipulation Language (DML)
DML commands are used for data operations such as inserting, updating, or deleting records.
Common DML commands:
SELECT: Retrieve data from one or more tables.
INSERT: Add new records.
UPDATE: Modify existing records.
DELETE: Remove records.
Example:
INSERT INTO employees (id, name, salary)
VALUES (1, 'Alice Johnson', 75000.00);
3. Data Query Language (DQL)
Some experts separate SELECT from DML and treat it as its own category: DQL.
Example:
SELECT name, salary FROM employees WHERE salary > 60000;
This command retrieves names and salaries of employees earning more than 60,000.
4. Data Control Language (DCL)
DCL commands deal with permissions and access control.
Common DCL commands:
GRANT: Give access to users.
REVOKE: Remove access.
Example:
GRANT SELECT, INSERT ON employees TO john_doe;
5. Transaction Control Language (TCL)
TCL commands manage transactions to ensure data integrity.
Common TCL commands:
BEGIN: Start a transaction.
COMMIT: Save changes.
ROLLBACK: Undo changes.
SAVEPOINT: Set a savepoint inside a transaction.
Example:
BEGIN;
UPDATE employees SET salary = salary * 1.10;
COMMIT;
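Because the snippets above are plain SQL, here is a runnable sketch of ROLLBACK using Python's built-in sqlite3 module (the table and values are illustrative, not from the examples above):

```python
import sqlite3

# In-memory database; isolation_level=None gives us explicit transaction control.
conn = sqlite3.connect(":memory:", isolation_level=None)
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER PRIMARY KEY, salary REAL)")
cur.execute("INSERT INTO employees VALUES (1, 50000), (2, 60000)")

cur.execute("BEGIN")
cur.execute("UPDATE employees SET salary = salary * 1.10")
cur.execute("ROLLBACK")  # undo the uncommitted raise

salaries = [row[0] for row in cur.execute("SELECT salary FROM employees ORDER BY id")]
print(salaries)  # the original values survive the rollback
```

Replacing ROLLBACK with COMMIT would make the 10% raise permanent instead.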
SQL Clauses and Syntax Elements
WHERE: Filters rows.
ORDER BY: Sorts results.
GROUP BY: Groups rows sharing a property.
HAVING: Filters groups.
JOIN: Combines rows from two or more tables.
Example with JOIN:
SELECT employees.name, departments.name
FROM employees
JOIN departments ON employees.dept_id = departments.id;
Types of Joins in SQL
INNER JOIN: Returns records with matching values in both tables.
LEFT JOIN: Returns all records from the left table, and matched records from the right.
RIGHT JOIN: Opposite of LEFT JOIN.
FULL JOIN: Returns all records when there is a match in either table.
SELF JOIN: Joins a table to itself.
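The difference between INNER JOIN and LEFT JOIN can be demonstrated with Python's sqlite3 module (the sample rows are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (name TEXT, dept_id INTEGER)")
cur.execute("CREATE TABLE departments (id INTEGER, name TEXT)")
cur.executemany("INSERT INTO employees VALUES (?, ?)",
                [("Alice", 1), ("Bob", 2), ("Carol", None)])
cur.execute("INSERT INTO departments VALUES (1, 'Sales'), (2, 'IT')")

# INNER JOIN drops Carol, who has no matching department.
inner = cur.execute("""SELECT e.name, d.name FROM employees e
                       JOIN departments d ON e.dept_id = d.id
                       ORDER BY e.name""").fetchall()

# LEFT JOIN keeps Carol, with NULL for the missing department.
left = cur.execute("""SELECT e.name, d.name FROM employees e
                      LEFT JOIN departments d ON e.dept_id = d.id
                      ORDER BY e.name""").fetchall()
print(inner)  # [('Alice', 'Sales'), ('Bob', 'IT')]
print(left)   # [('Alice', 'Sales'), ('Bob', 'IT'), ('Carol', None)]
```

Note that SQLite has no RIGHT JOIN or FULL JOIN in older versions; swapping the table order with a LEFT JOIN achieves the same effect.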
Subqueries and Nested Queries
A subquery is a query nested inside another query.
Example:
SELECT name FROM employees
WHERE salary > (SELECT AVG(salary) FROM employees);
This finds employees who earn above the average salary.
Functions in SQL
SQL includes built-in functions for performing calculations and formatting:
Aggregate Functions: SUM(), AVG(), COUNT(), MAX(), MIN()
String Functions: UPPER(), LOWER(), CONCAT()
Date Functions: NOW(), CURDATE(), DATEADD()
Conversion Functions: CAST(), CONVERT()
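Exact function names vary by dialect (CURDATE() and DATEADD() are MySQL/SQL Server forms; SQLite uses DATE('now')). A quick sketch of aggregate and string functions, again via Python's sqlite3 with made-up rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (name TEXT, salary REAL)")
cur.executemany("INSERT INTO employees VALUES (?, ?)",
                [("alice", 75000), ("bob", 50000)])

# Aggregate functions collapse many rows into a single value.
total, avg = cur.execute("SELECT SUM(salary), AVG(salary) FROM employees").fetchone()

# String functions transform values row by row.
names = [row[0] for row in
         cur.execute("SELECT UPPER(name) FROM employees ORDER BY name")]
print(total, avg, names)  # 125000.0 62500.0 ['ALICE', 'BOB']
```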
Indexes in SQL
An index is used to speed up searches.
Example:
CREATE INDEX idx_name ON employees(name);
Indexes help improve the performance of queries over large amounts of data.
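You can see the database choose an index with SQLite's EXPLAIN QUERY PLAN (other systems have their own EXPLAIN variants); this sketch assumes an illustrative employees table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE employees (id INTEGER, name TEXT)")
cur.execute("CREATE INDEX idx_name ON employees(name)")

# EXPLAIN QUERY PLAN is SQLite-specific; it reports how the query will run.
plan = cur.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM employees WHERE name = 'Alice'"
).fetchall()
print(plan)  # the plan should mention idx_name rather than a full table scan
```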
Views in SQL
A view is a virtual table defined by a query.
Example:
CREATE VIEW high_earners AS
SELECT name, salary FROM employees WHERE salary > 80000;
Views are beneficial for:
Security (hide certain columns)
Simplifying complex queries
Reusability
Normalization in SQL
Normalization is the process of organizing data to reduce redundancy. It involves breaking a database into multiple related tables and defining foreign keys to link them.
1NF: No repeating groups.
2NF: No partial dependency.
3NF: No transitive dependency.
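A minimal sketch of the idea: instead of repeating a department name on every employee row, store it once in its own table and link it with a foreign key (schema and values are illustrative; note SQLite only enforces foreign keys when the pragma is enabled):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when enabled

# Normalized: the department name is stored once, not duplicated per employee.
cur.execute("CREATE TABLE departments (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE employees (
    id INTEGER PRIMARY KEY,
    name TEXT,
    dept_id INTEGER REFERENCES departments(id))""")
cur.execute("INSERT INTO departments VALUES (1, 'Sales')")
cur.execute("INSERT INTO employees VALUES (1, 'Alice', 1)")

# The foreign key rejects a dept_id with no matching department row.
try:
    cur.execute("INSERT INTO employees VALUES (2, 'Bob', 99)")
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True: the orphan row was rejected
```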
SQL in Real-World Applications
Web Development: Most web apps use SQL to manage users, sessions, orders, and content.
Data Analysis: SQL is widely used in data analytics tools like Power BI, Tableau, and even Excel (via Power Query).
Finance and Banking: SQL handles transaction logs, audit trails, and reporting systems.
Healthcare: Managing patient records, treatment histories, and billing.
Retail: Inventory systems, sales analysis, and customer data.
Government and Research: Storing and querying large datasets.
Popular SQL Database Systems
MySQL: Open-source and widely used in web apps.
PostgreSQL: Advanced features and standards compliance.
Oracle DB: Commercial, highly scalable, enterprise-grade.
SQL Server: Microsoft's relational database.
SQLite: Lightweight, file-based database used in mobile and desktop apps.
Limitations of SQL
SQL can be verbose and complicated for certain operations.
Not ideal for unstructured data (NoSQL databases like MongoDB are better suited).
Vendor-specific extensions can reduce portability.
Java Programming Language Tutorial
Dot Net Programming Language
C++ Online Compilers
C Language Compilers
literaticat · 26 days ago
Is it ethical to use Chat GPT or Grammarly for line editing purposes? I have a finished book, 100% written by me and line edited by me already--and I do hope to get it traditionally published. But I think it could benefit from a line edit from someone who isn't me, obviously, before querying. But line editing services run $3-4k for a 75k book, which is beyond my budget.
I was chatting with someone recently who self-publishes. They said they use Chat GPT Plus to actually train a model for their projects to line edit using instructions like (do not rewrite or rephrase for content /edit only for rhythm, clarity, tone, and pacing /preserve my voice, sentence structure, and story intent with precision). Those are a few inputs she used and she said it actually worked really well.
So in that case, is AI viewed in the same way you'd collaborate with a human editor? Or does that cross ethical boundaries in traditional publishing? Like say for instance AI rewords your sentence and maybe switches out for a stronger verb or adjective or a stronger metaphor--is using that crossing a line? And if I were to use it for that purpose, would I need to disclose that? I know AI is practically a swear word among authors and publishers right now, so I think even having to say "I used AI tools" might raise eyebrows and make an agent hesitant during the querying process. But obviously, I wouldn't lie if it needs to be disclosed... just not sure I even want to go there and risk having to worry about that. Thoughts? Am I fine? Overthinking it?
Thanks!
I gotta be honest, this question made me flinch so hard I'm surprised my face didn't turn inside out.
Feeding your original work into ChatGPT or a similar generative AI large language model -- which are WELL KNOWN FOR STEALING EVERYTHING THAT GETS PUT INTO THEM AND SPITTING OUT STOLEN MATERIAL -- feels like, idk, just a terrible idea. Letting that AI have ANY kind of control over your words and steal them feels like a terrible idea. Using any words that a literal plagiarism-bot might come up with for you feels like a terrible idea.
And ethical questions aside: AI is simply not good at writing fiction. It doesn't KNOW anything. You want to take its "advice" on your book? Come on. Get it together.
Better idea: Get a good critique group that can tell you if there are major plot holes, characters whose motivations are unclear, anything like that -- those are things that AI can't help you with, anyway. Then read Self-Editing for Fiction Writers -- that info combined with a bit of patience should stand you in good stead.
Finally, I do think that using spell-check/grammarly, either as you work or to check your work, is fine. It's not rewriting your work for you, it's just pointing out typos/mistakes/potential issues, and YOU, PERSONALLY, are going through each and every one to make the decision of how to fix any actual errors that might have snuck in there, and you, personally, are making the decision about when to use a "stronger" word or phrase or recast a sentence that it thinks might be unclear or when to stet for voice, etc. Yes, get rid of typos and real mistakes, by all means!
(And no, I don't think use of that kind of "spell-check/grammar-check" tool is a problem or anything that you need to "disclose" or feel weird about -- spell-check is like, integrated into most word processing software as a rule, it's ubiquitous and helpful, and it's different from feeding your work into some third-party AI thing!)
not-terezi-pyrope · 1 month ago
AI continues to be useful, annoying everyone
Okay, look - as much as I've been fairly on the side of "this is actually a pretty incredible technology that does have lots of actual practical uses if used correctly and with knowledge of its shortfalls" throughout the ongoing "AI era", I must admit - I don't use it as a tool too much myself.
I am all too aware of how small errors can slip in here and there, even in output that seems above the level, and, perhaps more importantly, I still have a bit of that personal pride in being able to do things myself! I like the feeling that I have learned a skill, done research on how to do a thing and then deployed that knowledge to get the result I want. It's the bread and butter of working in tech, after all.
But here's the thing: once you move beyond beginner-level Python courses and well-documented Windows applications, there will often be times when you want to achieve a very particular thing that involves working with a specialist application. This will usually be an application written for domain experts of that specialization, so it will not be user-friendly, and it will certainly not be "outsider-friendly".
So you will download the application. Maybe it's on the command line, has some light scripting involved in a language you've never used, or just has a byzantine shorthand command structure. There is a reference document - thankfully the authors are not that insane - but there are very few examples, and none doing exactly what you want. In order to do the useful thing you want to do, they expect you to understand how the application/platform/scripting language works, to the extent that you can apply it in a novel context.
Which is all fine and well, and normally I would not recommend anybody use a tool at length unless they have taken the time to understand it to the degree at which they know what they are doing. Except I do not wish to use the tool at length; I wish to do one, singular operation, as part of a larger project, and then never touch it again. It is unfortunately not worth my time to sink a few hours into learning a technology that I will use once for twenty seconds and then never again.
So you spend time scouring the specialist forums, pulling up a few random syntax examples of their code and trying to string together the example commands in the docs. If you're lucky, and the syntax has enough in common with something you're familiar with, you should be able to bodge together something that works in 15-20 minutes.
But if you're not lucky, the next step would have been signing up to that forum, or making a post on that subreddit, creating a thread called "Hey, newbie here, needing help with..." and then waiting 24-48 hours to hear back from somebody, probably some years-deep veteran looking down on you with scorn for not having put in the effort to learn their Thing, setting aside the fact that you normally have no reason to. It's annoying, disruptive, and takes time.
Now I can ask ChatGPT, and it will have ingested all those docs and all those forums, and it will give me a correct answer in 20 seconds about what I was doing wrong. Because friends, this is where a powerful attention model excels: you are not asking it to manage a complex system, but to collate complex sources into a simple synthesis. The LLM has already trained on this material, and it can reproduce the inference in the blink of an eye, then deliver it in the form of a user dialog.
When people say that AI is the future of tutoring, this is what they mean. Instead of waiting days to get a reply from a bored human expert, the machine knowledge blender has already got it ready to retrieve via a natural language query, with all the follow-up Q&A to expand your own knowledge you could desire. And the great thing about applying this to code or scripting syntax is that you can immediately verify whether the output is correct by running it and seeing if it performs as expected, so a lot of the danger is reduced (not that any modern mainstream attention model is likely to make a mistake on something as simple as a single-line command, unless it's something barely documented online).
It's incredibly useful, and it outdoes the capacity of any individual human researcher, as well as the latency of existing human experts. That's something we've arguably never had before, in any context, and it's something you can actively make use of today. And I will, because it's too good not to - despite my pride.
thehomophobe · 4 months ago
Sitting On Your Lap Headcanons (Demon Slayer Edition)
Imagine you're taller and more muscular than the others and you pull them into your lap for comfort.
Rengoku Kyojuro: As Hashira, your schedules are always unaligned with each other, making every moment precious to both of you. And as someone who uses physical affection as a love language, you get grumpy not being in the arms of your favorite flame hashira for a long period. Once you get some time together, you're practically shining with joy; Kyojuro announcing his arrival to your estate after a long mission outside the region and offering a bento as a returning gift. As your last mission exhausted your strength, there's nothing better to do but sit down and eat with your boyfriend whilst he's "UMAI"-ing. Soon after dinner, you pull him into your lap; with a taller, more structured stature, you're able to lift heavy objects with ease. Kyojuro was practically like coddling a lion. Hands slithering around his waist, your head nuzzling the junction of his neck and collarbone. The scent of paprika, grapefruit, and charcoal seduces your nose as his hair titillated your cheek. He's so warm, like a heated teddy bear. Kyojuro breathed a chuckle at his current position; the irony of seeing a person larger and more brawny than him acting so soft and gentle is quite comedic. He loves it though, and he understands your fatigue. The both of you worked hard, especially this week. A break was in order, and by the gods above he will cherish this moment of peace as he fiddled around to a comfortable position for you.
Uzui Tengen: Obviously, this man knows a thing or two about lap sitting. Each of his wives gets pulled at least three times a week. He'd hear about it if it's less. It's the same with the wives; during moments of downtime, you'll find Tengen's relaxed form lying comfortably on one of their thighs. Mainly Hina, but Makio and Suma get their time in the spotlight too. When you became the fourth wife of the Festival God, something inside you heightened. Your anxiety. Your romantic anxiety. Hina, Makio, and Suma have all been married to him longer than you, which means more experience, which means more attraction between them, which means more affection between them—get where I'm going? And it didn't help when the three of them gawked at your tall, broad form. Well, Makio and Suma did; Hina was more intrigued than scared, but it didn't help! However, after you got used to the customs of polygamy and flamboyance, you started to feel like part of the family. Cuddle piles didn't feel like a can of sardines, and routine lap sitting became a new tradition within the family. That all started with you, of course, being the physically affectionate one besides Tengen. It was an accident, you swear! Your subconscious just pulled the flamboyant man to your lap without question. Even Tengen didn't realize what happened the moment you did it. An outing among the Sakura trees with the family turned into a little mishap once Tengen sensed his seat was magically cushioned by your meaty thighs. A shit-eating smirk plastered over his face as he teased you about your subtle act, querying your ulterior motives. Your blushing form only increases this man's smugness; you're a clingy one aren't ya? The girlies are ✨shooketh✨, well only Makio and Suma once again, Hina's chuckling at the sight. But the outing continued as usual. Now, Makio and Suma are fighting over a spot on your lap.
Tomioka Giyu: Doesn't know the slightest bit of affection. It's been years...he can't even hug properly anymore. His trauma crippled him so hard that the slightest touch of affection stops his heart longer than consecutive sneezes. However, despite his disability, Giyu is a good learner. A slow learner, but a good one. You already knew about his backstory as he trauma-dumped on you one night, so you became wary of your love language around him. Starting with gentle hand-holding to brushing shoulders to resting heads upon shoulders; smaller, subtle acts of physical affection. Giyu accepts each little act and tries TRIES to reciprocate them the best he can but he's so stiff! It's hard for him to rest his head on your shoulder since it's practically two stories above him. A normal hug stuffs his face into your stomach while you're petting his hair like a child. And it really doesn't help when Giyu looks up at you with his deadpan expression. One time, after a long mission away from home, you hobbled your way to his estate in fatigue and affection deprivation. Of course, the Water Hashira was hiding away at his home and had welcomed your arrival from the exhausting mission. In the blink of an eye, you swept this man off his feet with a beefy arm underneath his thighs and his head lodged into your neck, cradling him like a baby. Giyu's mind was fucking fried at that point. Just wide-eyed and motionless. He thought something had happened while you were away, right after he composed himself from the sudden embrace. Once the groundwork was laid, the exchange of affection flowed like a simple stream. Giyu's confidence grew a little as he started to initiate things, you're giving him more hugs and kisses now and then. The two of you built a solid foundation of love that you could stand on together. And yet that evening threw the man for a loop. 
Another exhausting mission led your heavy body to the Water Hashira's estate again, where the recluse was on the floor practicing his calligraphy in the dark with a single lantern. Giyu greeted your presence, asking if you were alright, and you responded with a simple "yes". Your eyebags were enough for Giyu to know you weren't completely "alright" and that food and rest were in order. However, something halted him. Picked him up. Plopped him on a cushion and ensnared him in a tight coil. His waist was wrapped and his back was heated, along with his face. Comprehending what happened, it took him a minute to realize that the culprit was his partner, and that he could no longer move from this position. Shit, something happened again. He can't even turn to face you; your face is smushed into his hair. Giyu wanted to pose a question, but even his tongue was tied. Oh well, maybe later tonight he'll ask.
Shinazugawa Sanemi: It's no surprise that Sanemi avoids any sense of weakness or dependence. A hug was a symbol of strength deprivation, a sign of frailty. He's a Hashira for fuck's sake, the last thing he needs is him lying in the arms of another person. This has really strained your relationship with the Wind Hashira. While you may not be the Love Hashira, physical affection is the only way you can express your attraction to Sanemi. No amount of compliments on his looks or physique could compensate for the chance to hold his rugged hands. His independence is as unyielding as his training style, which you interpret as "keep your hands to yourself." Your hands tremble as the two of you watch the sunset after finishing sparring. You long to hold him, to sit him on your lap and run your fingers through his hair. But you can't; to him, an embrace is like waving a white flag. Or so you thought... In private, Sanemi reveals a side you've never expected to see since your first meeting at the semi-annual Hashira gathering. When you're home, the albino spares some time for you to share a meal or just escape from the others. He may never say it outright, but Sanemi finds comfort in you and your presence. He doesn't just respect you; he admires you as both a comrade and a partner. Just keep that between us! After a blunt confession one time, Sanemi would throw an arm around you, kiss you, and hold you despite your larger frame. He needs this. He'll never express it directly, but he craves something to hold onto, and you provide the patience he needs. That gives you the opportunity to sit him on your lap. At first, Sanemi protests, asking what the heck you're doing and why you're treating him like a babydoll, then he'll squirm and demand you let him go, before eventually resigning to his fate, grumbling as he adjusts himself. You squeeze him tighter, prompting him to growl at you to stop before you break his spine. Then you call him pretty, which leaves him flustered.
Kocho Shinobu: We all love our petite Insect Hashira. Known for her quick wit and sharp mind, she could make pure nonsense sit up straight. You loved her disposition; you don't see why everyone's annoyed by her quips, they're funny! As much as she's a wind-up merchant, she's talented in her work and equally as strong as the other Hashira. God did you fall hard for her the moment she quipped about your figure in comparison to your mind. How your muscle mass seemed more prominent than your own brain. All because you swapped her blade for yours on accident. And yet, getting berated by her made you shiver with delight. A kilig and a wanton. You were starving and she gave you bread crumbs until finally she reciprocated your feelings. From a glance, your relationship seemed like a comedic juxtaposition. The Big Guy Little Guy comedy duo. Of course, you're not a complete oaf, but as a human, you tend to make silly mistakes, to which Shinobu picks up the pieces of your messes. In return, you give her full submission of yourself. Whipped and wrapped around her tiny finger. As the head of the infirmary and a Hashira, Shinobu gets very little time with you as demon slayers hobble up to her doorstep missing limbs and covered in blood, both theirs and not. The whole solar system must be aligned in order for both of you to have downtime together. When you do, a chat over dinner allows the two of you to catch up on each other's health and daily life. Shinobu tells you the horrors of her missions and the infirmary whilst you reply with your own scary stories on the job. Once you've finished your food, the rest of the evening is yours to keep until the break of dawn, or another group of demon slayers are piled up at the Butterfly Estate. Your eyes go from watching the stars to gazing upon the beautiful Papillon. The cheeky Thumbelina quips that you're staring again, smiling at your lovestruck sight.
You blush, then later grab her waist and plop her on your lap, hiding your red face into her hair to see if it would cool off. It's humorous to look at; seeing Shinobu sitting on your lap like a throne of flesh and muscle. Just like the queen you treat her as. Giggles emit from her, quipping yet again about how clingy you are. You're making her miss you too much. And yet that is what she loves the most.
Iguro Obanai: While Giyu's reclusiveness was due to his introversion, Obanai's was due to just sheer disinterest and enmity towards the others, except for Sanemi and Mitsuri. He too eschewed weakness like his closest comrade, making your relationship strained. How he always presented himself made you believe you were on his hit list, even though you treated him with respect. Though he never said---or looked---like he despised you; he's seen your breathing style, your talent, and your hard work. You've trained and sparred together. He even complimented you once. Back then, it was hard to decipher him. The animosity he has grows within him like snake venom, ready to be spewed through his sharp tongue at the dilettante. But he never spat on you, which once again threw you for a loop. It wasn't until you saw the Serpent Hashira leaving your doorstep after placing something down. You caught him before he left the lot and asked about his presence and his present. Obanai merely said he was just delivering a parcel, nothing more. Though his scaly companion, Kaburamaru, spoke different words as he nipped his friend on the cheek in displeasure. You knew the snake was venomous, so you started to worry about your fellow comrade. Obanai settled you down, saying that he's immune to venom. You took a breath of relief. A beckoning hand and an open door symbolize your want for him to come inside and resume the conversation. Obanai---with the glare of Kaburamaru---reluctantly took your welcome, taking the parcel with him. Once inside, the package was given to you again, followed by an explanation of "It's nothing special". Revealing the package inside was a simple necklace. Nothing special in Obanai's gorgeous eyes. You, however, treasured it dearly, wearing it immediately. Believing that was settled, Obanai began to leave in haste. But he just got here, he should at least stay for a bit. Perhaps dinner? No, he ate already. Rest? The Serpent Hashira stopped in his tracks.
Fine, he'll rest. For a bit. It was then you dragged him down to your lap. You absolutely did not realize how short the man was. For some, it's laughable, to find a male so strong to be so short in stature. For you, you didn't mind. More reasons to plop him into your lap. Speaking of which, the serpentine swordsman seems to be shaken by the situation. He's flustered, protesting to be off your lap this instant. But once again, Kaburamaru changes his mind as he slithers up your chest to your neck, locking the both of you in place. He's trapped, Obanai thought, and yet, he doesn't seem to mind.
Kanroji Mitsuri: Mitsuri loves everyone and everything. Of course, she has her dislikes, which are common among regular folks: demons, murderers, abusers, and people who don't respect others for who they are. The list goes on, but you get the idea. Otherwise, she loves everything, and she loves you! She adores your strength, your chiseled abs, the contours of your muscles, your unwavering kindness, and the way you carry yourself high despite the many glares and raised brows at your form. Not to mention, you're a real sweetheart. That list continues as well. Mitsuri absolutely cherishes you. So when you heard her swooning over how you soothe a crying baby during a mission together, your heart raced. She likes you; you blushed—she really likes you. Just hearing the sound of her voice makes you swoon too. You two have always been partnered up, which forced you to bond during missions. You cherished it, though; every moment you watched Mitsuri dispatch demons felt like the most cinematic scene of your life. The woman is a queen, and you're happy to kiss the ground she walks on. To show your admiration for her, you decided to cook a feast fit for a goddess. The finest ambrosia for her loveliness: Sakura Mochi, with a side of everything else. You've already invited her over, so you needed to set up quickly. She'll be back from her mission soon, and she'll definitely have an appetite. *Knock knock knock* Shit! She came back early. No matter; you just finished up and rushed to the door. Mitsuri, despite the small bags forming under her beautiful eyes, was ecstatic! You invited her over AND prepared a gift; she was giddy just thinking about it. After exchanging greetings and hugs, it was time to feast! The steam of warm, delicious food filled the room. Mouthwatering morsels gathered on the table like a communion. Mitsuri hugged you a third time, squeezing you as she squealed in delight. The two of you chatted whilst you ate together. 
Her smaller form sat in your lap with glee, blushing at the meaty, comfortable thighs beneath her and the warm muscular arms wrapped around her waist. And when you confessed your love for her, she's tangled in your arms yet again. 
Tokito Muichiro: 
Let's say you're both the same age.
The Mist Hashira was an estranged member of the corp. Distant, dazed, a little aloof. In the beginning, he didn't really acknowledge you much; just another comrade that was his age. Your time there allowed you to interact with Muichiro a lot since he's the only one close to your age. He never shoos you away when you join him for cloud gazing. You don't even know if he acknowledges your presence. You thought he just ignored you. Until he started talking to you. You've never heard his voice, but God, you fell in love with it. It sounded like nostalgia; mellow and soothing yet mysterious. He called you an ox; mighty and diligent yet patient. You weren't sure if it was a compliment or an insult. His deadpan expression made it hard to tell. You wanted to hear his voice more, that sonnet of serenity. And once you saw the sheer height difference between him and the other Hashira, you wanted to cradle the boy. Swaddle him in blankets like a baby. But of course, he'll never let you do that. Until the incident at the Swordsmiths' Village. Where he truly acted his age. Muichiro's eyes shimmered like Croatian Blue Grottos. A smile graced his face when you visited him in the infirmary, greeting you like an old friend. The two of you interacted more often; inviting you to his home for origami and tea or competitive paper plane throwing. But your favorite thing to do together is cloud gazing, especially at dusk. When the sun was right and the temperature was warm, you sat at the entrance of his estate watching the clouds go by. Muichiro was inside taking a bath after a mission. Eventually, he returned to you, freshly clean and dressed in his casual yukata. He joined you at the side for a bit until you decided to take the small boy into your lap. You held him ever so softly. Caressed his gorgeous hair and kissed his forehead whilst calling him pretty and cute. He's so tiny compared to you; seeing his hands engulfed by yours made you squeal internally.
You couldn't see it, but Muichiro's flustered stammer indicated his embarrassed state. You loved your little Mui so much.
chronicreativity · 5 months ago
masterlist of places to submit creative writing
it's intimidating thinking about submitting your precious work to judgement, but all the rejections are worth it when you finally get that one glowing acceptance email that puts your anxieties and impostor syndrome to bed. but where do you submit? it can be incredibly overwhelming trying to find the right sites/journals/zines to submit to so i thought i'd create a little collection of places i have found to submit to and i will update it whenever i find new discoveries.
PROSE ONLY
The Fiction Desk
They consider stories between 1k and 10k words, paying 25 GBP per thousand words for stories they publish, and contributors receive two complimentary paperback copies of the anthology. (There's a 5 GBP submission fee per story, which sucks)
Extra Teeth
Works of fiction and creative nonfiction between 800 and 4,000 words receive a 140 GBP payment upon publication in the magazine, as well as two copies that feature your work. If your work is selected to be published online, you get 100 GBP instead. A Scottish-based publication that also offers mentorships to budding writers. (Free)
Clarkesworld
Fantasy and sci-fi magazine accepting submissions of fiction from 1k to 22k words, paying 14 cents per word. Make sure you read their submissions page carefully; it gives you a good idea of what they're looking for and what will get you one of those disheartening rejection emails. (Free)
Granta
Open to unsolicited submissions of fiction and non-fiction. Unfortunately they do charge a 3.50 GBP fee for prose submissions, but they do offer 200 free submissions during every opening period (1 March - 31 March, 1 June - 30 June, 1 September - 30 September, 1 December - 31 December) to low income authors. No set minimum or maximum length, but most accepted works fall within 3,000 and 6,000 words.
Indie Bites
A fantasy short fiction publisher looking for clever hooks, strong characters and interesting takes on their issues' themes. Submissions should be no longer than 7,500 words. You get an honorarium of 5 GBP for each piece of yours that they publish - it's not much, but yay money! (Free)
Big Fiction
Novella publishers (7,500-20,000 words) looking for self-contained works of fiction that play with things like the linearity of narratives, perspective, structure and language. (Free)
Strange Horizons
Employing a broad definition of speculative fiction, they offer 10 cents a word for spec fiction up to 10,000 words but preferably around 5,000. (Free)
Fantasy and Science Fiction
They publish fiction up to 25,000 words in length, offering 8-12 cents per word upon publishing. (Free)
Fictive Dream
Short stories from 500 words to 2,500. They want writing with a contemporary feel that explores the human condition. (Free)
POETRY AND PROSE
eunoia review
Up to 10 poems in a single attachment, up to 15,000 words of fiction and creative non-fiction (can be multiple submissions amounting to that or a single piece). It's free to submit to, and they respond in 24 hours (I can vouch for that).
Confingo Magazine
Stories up to 5,000 words of any genre and poems (a max of three) up to 50 lines. Free to submit to and offer a 30 GBP payment to authors whose work is accepted.
Grain Magazine
Another Canadian based publication also supportive of marginalised identities. They accept poems (max. of six pages), fiction (max. of 3,500 words) or three flash fiction works that total 3.5k, literary nonfiction (3,500 words) and queries for works of other forms. All contributors are paid 50 CAD per page to a max of 250. Authors outside of Canada will need to pay a 5 CAD reading fee but they do offer a limited number of fee waivers if this impacts your ability to submit.
BTWN
An up-and-coming lit mag looking for diverse works that play with genres, break the rules and are a little weird. They want what typical lit mags reject. Stories up to 7,000 words, non-fiction up to 7,000 words and up to 4 poems totalling no more than 10 pages, hybrid work, comics/graphics up to 5 pages, original periodicals up to 14,000 words of prose or 20 pages of poetry. (Free)
Gutter
Accepting submissions in spring and autumn of work that challenges, re-imagines or undermines the status quo and pushes at the boundaries of form and function. If your contribution is chosen, you get 30 GBP for your work as well as a complimentary copy of the issue. Up to three poems (no more than 100 lines), fiction and essays (up to 2,500 words).
Whisk(e)y Tit
This one's worth checking out just for their logo. They're looking for fiction whether it's short stories, flash fiction or novel excerpts up to 7,000 words, up to 5 poems, up to 7,000 word essays, screenplays and stage plays (can be full works or excerpts up to 20 pages). (Free)
FOR QUEER AND MARGINALISED WRITERS
Plenitude magazine
A queer-focused Canadian literary magazine accepting poetry, fiction and creative non-fiction. They define queer literature as literature created by queer people. (Free)
Lavender Review
Poetry written by and for lesbians. An annual Sappho's Prize in Poetry takes place every October. (Free)
AC|DC
"A journal for the bent", always open for submissions from queer writers of all experience levels. They lean towards dark and raw writing but are open to everything as long as it's not over 3,000 words. (Free)
Sinister Wisdom
A literary and art journal for lesbians of every background. They accept poetry (up to 5 poems), two short stories or essays OR one longer piece (not exceeding 5,000 words), as well as book reviews (these must be pitched before they are submitted). (Free)
Queerlings
Open annually from Jan 1st to March 31st, they publish short stories of any genre (up to 2,000 words), flash fiction/hybrid work (500 words), poetry (up to 3 poems per submission with a 20 line maximum on each) and creative non-fiction (2,000 words) written by queer writers. (Free)
underdog lit mag
Based in the UK, they focus on amplifying emerging and underrepresented writers. If you're female, POC, LGBTQ+, working-class or all of the above with a story of 100-3,500 words that fits their flavour of the month (the last flavour was Magical Realism) send it their way! (Free)
fourteen poems
London-based poetry publishers looking for the most exciting queer poets. You can send up to five emails to them within their deadlines and you get 25 GBP for every poem published.
Foglifter Journal
A press publishing the most dynamic and urgent queer writing. Poets send in 3 to 5 poems (max. 5 pages), writers send in up to 7,500 words of fiction or non-fiction or three flash fiction pieces, and cross-genre creators send in up to 20 pages within the submission windows March 1 to May 1 and September 1 to November 1. (Free)
OTHER SOURCES
Short Stories: X | X | X
Poetry: X
fxtalitygod · 2 years ago
VIII. ~Survival~
Summary: You were determined to survive longer than anyone, even if you were set to marry him.
Genre: Historical AU, angst, mature, suggestive, arranged-marriage
Warnings: Dark themes, gore, graphic imagery, themes/depictions of horror, swearing/language, suggestive content, pet names (Little Flower, used 5-6x), implied harsh parenting (on Sukuna's end), mentions of adult murder, implications of impregnating, implied Stockholm Syndrome, images/depictions of dead bodies (both human and animal), child death/murder, character death(s), slight misogynistic themes (if you squint), NOT PROOFREAD YET (sorry ;-;)
Word Count: 6.5k
A/N: For starters, I want to clarify that I am choosing to purposely not mention the names of the twins. Although this makes it difficult on my end, I wanted you, the reader, to decide on the names of your choosing while reading.
P.S. This is the longest chapter I have written. Sorry it took so long but I hope it proves well and worth the wait. (╥﹏╥)
JJK Mlist•Taglist Rules• • Pt.I • Pt. II • Pt. III • Pt. IV • Pt. V • Pt. VI • Pt.VII • Pt. VIII • Pt. IX
You could see the fire, smell the blood, and hear their screams as they begged for mercy. They cried out for their children and loved ones whose bodies were now burning in the roaring flames, reduced to cinders and ashes. Those who threatened to charge were killed before they could make contact, their bodies contorting in ways the human form was incapable of, drawing cries of pure agony as they were left to bleed out in their mangled state– left to suffer as the life slowly drained out of them. If a suffering soul was fortunate, the fire would catch them aflame and kill them faster, or debris would land in a fatal spot or crush them whole to end their misery.
Viewing the demolished structures and flaming bodies, both dead and alive, was a petrifying view– yet you felt nothing. Your breath was methodical, your expression blank, your body unmoving. Pity and remorse were thrown out the window– fear and anguish had long vanished; however, anger and resentment lingered like a tiny flickering flame that continued to grow with each crumble and cry that could be heard.
Although your exterior seemed calm and collected, your heartbeat said otherwise as it accelerated, pounding against your chest so hard you could eventually drown out the hollers of distress with its rapid thumping.
“Mama, look!” Two voices sounded.
Your breath hitched as the familiar calls rang through your head. The pounding in your chest quickened and strengthened when the footsteps got closer. Hearing their giggles and whispers caused your form to tense– not having the strength to say or do anything. How would you explain your current position? How would you tell them tha-
“Mama, are you alright?”
You snapped out of your daydream to find yourself in front of the stream, taking care of your personal tasks, this chore being the cleansing of garments. When you had arrived there was unknown, but you assumed it had been far longer than you should have resided in that area. The dreams you would endure during the solace of night, despite those nights being anything but comforting, had begun bleeding into the day, becoming more prevalent and gruesome. It was becoming quite the distraction.
"Mama?"
Before you could allow your thoughts to consume you, you focused your attention on your son and daughter, who were awaiting your reply with innocent eyes. Yeah, their virtue never ceased to amaze you. They were too good for this world– their empathy brought light to your soul that you believed had burnt out long ago– pride and joy.
You looked at your twins with an awaiting gaze as you watched their expressions turn into excitement at the realization they had caught your attention. You blinked once before being met with a piece of parchment littered with ink. It did not take long to realize that the twins had made you something in their short time away. Blinking up at the two, you gave them a fond grin before looking back down at the material. Upon viewing the parchment, you saw an image of what you assumed to be an image of a bird, and next to the picture was a small note.
" To show gratitude to our dearest mother," you read aloud before holding the small gift to your chest, "Thank you, my loves, it is lovely."
The joy on their faces from the small compliment warmed your heart, affirming your previous statement that they were too good for this world. There were moments when you could not believe that the twins were a product of you and Sukuna– a recurring thought of yours. They were, without doubt, your most significant and last blessing, as things around the temple had not been going as smoothly as they had during the first few years you resided in it, and it was clearly starting to take a toll on everybody, including you.
"Mama, guess what we learned today?" Your son exclaimed excitedly, causing you to jump a little, not expecting the sudden outburst of enthusiasm.
"Was it penmanship? Because the both of you are getting better. Have you been practicing like I have told you to?" You joked, poking at their bellies, causing them to giggle.
"No, Mama, Father taught us about Jujutsu!" your daughter shouted enthusiastically.
"Hey, I wanted to tell her," the boy pouted.
"Sorry," your little girl apologized as she turned to look at her brother with an apologetic look.
The sibling tried to look upset, not wanting to give in quite yet, but when he turned around to look at his sister's guilty expression, he launched to hug her. If you had said it twice, you were to state it a third time– the world did not deserve this pair– you could not stress that enough.
"Did he now?" you breathed, your anxiety slowly creeping up the back of your neck like it did so often.
You were aware of the agreement you made with Sukuna all those years ago, and as of things so far, you both were holding up to your ends of the deal. The twins continued to be educated under your supervision and occasionally your attendant. Your little girl and boy were now at the ripe age of six, at which they would begin manifesting their cursed energy, so they were now taking lessons under their father's supervision– that notion made you apprehensive of your deal.
As you previously mentioned, things were not going as smoothly as they once were. Your village had become slightly non-compliant recently. The traditional wedding ceremonies had stopped a little over a year ago as families started refusing to hand over their kin to Sukuna. Despite the disrespect, Sukuna did not care, as he had plenty of women to satisfy him; however, to say that he was taking the rebellion lightly would be a complete lie. Over the last few years, more guards were posted for precautionary reasons. Nothing major had happened yet, only the occasional distant and muffled voices chanting in protest.
With such circumstances, emotions were running high, and the crowd only seemed to get bigger as the days passed. You could admit that some days were worse than others, but it did not change the fact that these events could cause a catastrophic resolution at the hands of your husband. Viewing the situation, there was no question that Sukuna would be more occupied than usual; however, it was not amid meetings or trivial tasks but with his children instead.
Sukuna could hardly be viewed as a legitimate father but rather a mentor– a cruel one, judging by the round, tear-stained cheeks of the children who walked into the garden after their designated time with their dad. The only children who seemed the slightest bit content with their learnings were your son and daughter. Your twins had not been training for long, but they had outlasted most of the other kids in keeping their spirits unbroken. The first day your little boy and girl had left to meet with Sukuna, you could not help but feel nervous; however, when they came back, they were all giggles and smiles as they told you of their time with the man they call father. To say you were shocked was an understatement, but despite that astonishment, you were simply glad they left a good impression and walked out unscathed, their spirits still intact.
"So, have your studies with your father come to fruition yet?" You asked, not thinking of your wording as the question effortlessly slipped from your tongue.
"Come to fruition?" your son repeated, looking at his sister to see if she understood the meaning of your words.
Despite your children being clever, they were still young and naive, and that naivety could not help but make you laugh gently as you watched them whisper to each other as they tried to decipher the saying. They paused in their little hushed conversation at your breathy giggle, flustered as they looked at you, hoping you would grant them the knowledge they wanted.
"Mama, stop laughing. What does it mean?" the two whined in sync as they looked at you with awaiting eyes.
"Alright," you managed to say between your little fits of giggles, "It means to succeed in the progression of a goal. In this case, did you reach the intended goal of your lessons today?"
Your twins thought over your words for a minute before a look of realization washed over their faces. The two looked at one another to make sure the other understood, finding they were both on the same page before turning back to your now-awaiting gaze. Proud smiles were once again plastered across their faces.
"Not exactly," your daughter stated.
"What do you mean, 'not exactly'?" you questioned with a raised brow as you looked for an answer.
"Well...we do not have cursed energy yet, but Father said it was okay because we will..." Your son trailed off before looking at his sister for assistance, trying to remember the exact words Sukuna had used.
"Manifest!" your daughter shouted in revelation after a moment of thought.
"Oh yes, manifest! He said it was okay because 'we will manifest our cursed energy soon enough,'" your son finished, ignoring the distant whispers and tiny gasps that had suddenly emerged from the surrounding women and children.
"And you both will, I am sure of that– my intuition is never wrong," a deep voice resonated behind the twins.
You froze as you looked up to see Sukuna looking down at you, a proud grin on his face as he let the words settle. Your smile had long disappeared, your lips forming into a tight line as you met his gaze. His presence was not what had upset you as you had grown familiar with his company and unexpected visits, but rather the fact that you knew he was right.
"Father!" the twins shouted, bowing before going in to hug his legs, looking up at him with their innocent doe-like eyes that shone the color of your own hues, little flecks of what seemed to be crimson could also be seen if the light hit them just right.
Your heart stopped for a second as you watched your four-armed companion freeze on the spot at the sudden attention. Although you knew Sukuna could not lay a hand upon his children due to the contents of the pact you had made with him, it did not eliminate your uneasiness, the worry that he would grow to resent them. The curse-user was not a man of tenderness, nor did he like to be presented with such fondness, especially from his offspring. There was no room for weaklings in his realm, in his brigade of suitable heirs.
You sat there, waiting for his reaction, chewing on your lip to the point it drew a small amount of blood. The man stood stiff, looking down at the two smaller beings that clung to his legs in a warm greeting before moving to bend down, causing your heart to spike in rhythm. The questions flooded your brain once more like they often did when it involved your significant other's actions. Sukuna took a set of his arms, placing one on each twin's back before meeting their eye level.
"Did I ever indulge either of you with the story of how I found out about your mother's conceiving of the both of you?" Sukuna asked, an arched brow with a devious smile as he switched eye contact from one twin to the other.
"No," your son replied honestly, curiosity gleaming in his eyes.
With that short answer, Sukuna looked at you, a mischievous glint in his eyes before redirecting his focus on his kids once more.
"I knew that your mother would one day bear the fruit of her fertility, but there was one particular evening where I could sense an odd presence. I immediately called upon your mother, and when I was met with her physique, I could tell she was with child. It would have been unnoticeable, but my perception is unlike the average man. Looking at your mother, I could see her stomach was softer and slightly rounder, her ankles somewhat swollen, and her breasts enlarged."
You held back the bile rising in your throat as your husband explained his side of the story you knew all too well, remembering the exact events that led up to that day. His vulgar description of the event sickened you to the core.
"Your mother was unaware of her condition, but I was. The moment I felt her stomach, I could feel the presence of not one but two essences in her womb. I remember the look on her face when I told her– pure shock."
Sukuna's words offended you because pure shock was an understatement. You were undeniably mortified that day, but you would never admit that to your children. For their happiness's sake, you were willing to push the bitter memories of your pregnancy aside. They did not need to know your previous disdain for them– you had not even met them yet. What they did not know could not hurt them.
"How could you sense both of our essences?" Your daughter questioned, tilting her head as Sukuna focused his attention on her.
"Always the curious one, aren't you?" Sukuna noted, a teasing grin forming on his face.
"Mama says it is always best to stay curious because you will never learn anything new if you are too stubborn or scared to keep asking questions."
"Did she now?" Sukuna's grin grew wider as he drew his attention back to you, "And what do you believe that is a lesson of?"
"Fearlessness?" your daughter answered hesitantly.
"Close, but not quite," Sukuna started, "She is teaching you confidence."
"Is that not the same thing, Father?" your daughter questioned again.
"Not exactly, my child," The curse-user paused, looking at you for a fleeting moment before continuing, "being fearless is alright in certain circumstances– something as frivolous as a mouse is something to lack fear of, but there are certain things you should fear. Fear, my child, is what keeps you alive; however, it can be crippling at times. It is the confidence to overcome those fears that lets you survive."
"Why have you come here, Sukuna?" you suddenly asked, becoming tired and uncomfortable with his lingering presence. You knew that the man had not come for idle conversation and to share invasive stories nor explain your teachings.
Had your twins been any older, they would have caught onto your passive aggression as you addressed their father, staring at him blankly as he drew his attention to you. You were aware of the line you were crossing, aware of the hostility you were presenting in the presence of your children, despite their obliviousness to it, but with tension high in the temple and his sudden visit, you felt you had every right to feel uneased. Sukuna's gaze turned from teasing mischief into a grave look.
"Well, Y/n, I wish not to sully our bonding with grave matters," the man spoke, returning your passive-aggressive tone, "we'll speak of it later."
"So why did you come, father?" Your boy asked, looking up at the tall man.
"Must I have a reason to visit my kin?" Sukuna teased.
"Well, we do not see you much outside of lessons," your daughter jumped in with her own comment.
"Observant as well, huh?" Sukuna huffed, pausing for a moment before speaking up once more, "I was wondering if you both would accompany me on a hunt?"
That question caused their little orbs to light up, their little heads turning to you, silently begging for your approval. Looking at their pleading eyes, you could not say no, giving a nod of approval. If they were cheerful before, they were exhilarated now. These kids were to be the death of you if a simple pair of puppy dog eyes could make you cave like this, and you were okay with that.
"Can Mama come too?"
Your blood ran cold at the mention of your name. There was no particular reason to be troubled, but at this point, it was a habit for these tense feelings to rise whenever your name was mentioned. So, as you look at your supposed significant other, you could feel yourself about to explain how you had other activities to attend to.
"I do not see why not."
Now, that was unexpected.
The words you were going to speak paused in your throat, swallowing them down when your little boy and girl rushed up to you after hearing Sukuna's approval, hugging you as they tugged on your hands to stand. What was he playing at? Despite the inquiry of his intentions, you had to push it aside as you saw the thrilled look on your children's faces–they most likely wanted to show off what they had learned while spending time with their father. They always returned with smiles of pride after spending time with their dad. You would give up your life to see them smile at you like that for as long as you lived, so you followed them as they walked beside Sukuna despite your own apprehension.
Time slowly passed as you trekked quietly through the nearby woods, watching Sukuna's movement as he led the three of you through the brush, pausing when something caught his eye. It took only a moment for a bow to appear in his hand, but just when you expected him to use it, he motioned over to your son, giving the child the weapon. Every motherly instinct told you to confiscate the bow, but you quickly reminded yourself of your pact: Sukuna was bound to protect your children from harm, and you had accepted that he could use any training methods he deemed necessary– this being one of them.
Sukuna was crouched the lowest he could get, arms hovering over your boy's form, guiding his son while speaking in a low voice as the two focused on the prey ahead. Looking into the small clearing, you could see a few grazing rabbits, clueless and defenseless to the threat before them, nibbling on the dewy grass. The bow's snap and the sight of an impaled rabbit caused you to return from your light daze, turning over to see your son smiling in excitement.
"Did you see that, Mama? I did it!" the boy beamed, maintaining a hushed voice.
You gave your son a warm smile, nodding in reassurance before watching him switch places with your daughter. The rabbits that previously remained in the clearing had run off, but one straggler emerged from the bushes, unaware of what had occurred, clueless about its impaled companion. In a mere few moments, the creature suffered the same fate as the previous one, bringing joy to your little girl. She turned to you with the same smile as her brother's– it frightened you.
You had no doubt that you loved your children for who they were. You loved their innocence, passion, and joyful nature, but a realization had dawned upon you in these moments– one that made your heart drop to your stomach.
"Mama, you try!" your daughter called out, grabbing your hand as she led you toward a better spot to shoot from, that spot closer to Sukuna.
The reason for their upbringing was to take their father's place, to be his heirs, and Sukuna was not giving that role to a charitable and naive son or daughter. Things seemed pleasant for now, and your children might keep their nature through adulthood, but one thing was certain: whether they stayed that way or not, they would feel justified in their actions– believe what they were doing was good, because that is what their father was teaching them, and you were enabling it.
"Darling, I'm not sure that it would be wise for me-"
"I think it is a marvelous idea," Sukuna interrupted, standing from his crouched position and grabbing your waist.
You felt the man's hands slither up your body, messing with the material of your clothing before touching your flesh. Your skin burned unpleasantly as his hands settled, a faux attempt at adjusting your form when you were perfectly capable yourself; however, with your twins present, you would not dare cause a stir. Looking at the clearing, there was nothing seemingly there, as all the critters that previously inhabited it had run off.
"There's nothing for me to target, so maybe we should end this," you suggested, trying to excuse yourself from this activity, keeping a low tone.
"If nothing is there, why do you whisper, Little Flower?" Sukuna responded in a hushed voice, feeling his smirk form as his face rested against your cheek.
Before you could respond, the sound of fluttering was heard. Without thought, you lifted the bow's angle, shooting the arrow into the air– a thud sounded shortly after as whatever you had shot hit the ground. Looking down, you could see a bird skewered with an arrow, blood pooling from its limp body and staining the grass surrounding it.
"Mama, you did it!" the twins exclaimed, thrilled you had participated.
Their sounds of excitement were drowned out by the ringing of your ears as your gaze lingered on the deceased animal. What had you done? Yes, you had viewed death without so much as a flinch, but you were not the one with blood on your hands. You were unaware you could perform such an action– you had never held a weapon before, only a mere kitchen knife.
It disturbed you.
How did you kill the helpless creature so instinctively? So effortlessly? The worst part is...
It felt good.
The ringing eventually subsided as the bow settled to your side, turning your head toward the two-faced man you called 'husband' and handed it to him. Thankfully, Sukuna took the item with no smug remark or wicked grin, giving you one of his infamous blank looks before moving his gaze toward the kids, motioning for them in the direction of the temple, settling one of his hands at the small of your back as you all started the walk back.
Making the hike back, you settled on your earlier realization regarding your children. You would love them until the end of time, and you had no doubt about that; whether they were inherently good or bad– you would love them. But now, as you continued to think, all you could think about was the future. Where would you and your twins be standing in the years to come? What kind of life would the three of you lead if you were all to live? How many bodies would have to pile under your feet before you were guaranteed genuine safety for you and them?
For the years under the same roof as Sukuna, you had been focusing on your mother's words, the promise you had made to her.
"I promise I will survive– longer than anyone."
Your life had been summed up by that promise. So far, you had kept faithful to it, because you had been surviving. From your wedding day to your pregnancy, to the many inspections you attended, all up until now, as you approached the temple, you had been surviving. You played all the right cards to get here and made all the right sacrifices to keep your children alive– what more could you ask for? You were alive and breathing along with your children, and that is all that truly mattered, right?
No.
You may have been playing this game of survival and have been successful thus far, but there was one thing you had failed to do...
Live, you had failed to truly live.
You have played your part in your husband's sick game. You married him, gave him your purity, gave him children, and now you were done. You were more than aware of the pact you had made with your husband, but almost every contract had a loophole whether it could be seen or not.
"We are relocating."
Your heart rate accelerated as Sukuna bent down to whisper those words into your ear, the words taking a moment to register. Was it out of fear? Anger? Possibly both? No. It was excitement. You had given your word that you would never leave the temple unless it was under Sukuna's supervision and say so. Unless he accompanied you outside those gates, you would remain here; however, you had never given your word to stay by his side.
You had given your word to stay at the temple.
The curse-user had just given your confirmation of freedom without being aware he was doing so.
"May I ask why?" you dug, trying to keep your composure to not seem suspicious, as if he could tell what you were thinking if you had shown the slightest emotion.
"I have simply grown bored of this place, plus I have got what I needed from these people, and they all stand right here before me," Sukuna explained, the last part of his statement being clear that he was referring to you and the twins.
"Where would that leave my village?"
Now, that was a genuine question. You were not as concerned for your village but rather your family instead. The four-armed beast of a man was not known for leaving a town so quietly– you had heard plenty of notorious stories from survivors to prove that.
"What of it?"
"Will it remain in one piece, or will it be returned to the dirt?"
"That entirely depends on them, Little Flower."
The answer was vague– it was neither a confirmation nor a denial, but you could understand the meaning behind his words. For the sake of your family, you hoped that the village elders would not do anything stupid. You hoped they could shove their egos aside and let Sukuna leave the town with what minimal disturbance he was willing to make. Everything you have worked so hard to achieve would be ruined without their cooperation.
Approaching the temple, you could not help but feel the delight swell in your chest. After years of this torment, this unjustified punishment, you are finally going to be free. You have survived, and now you will live. The journey has been difficult, but now you will achieve the tranquility and normalcy you deserve. Your children will have the chance to live a standard and carefree life, unlike the competitive and tiring one they would achieve with their father.
It was finally over.
Arriving at the temple did not feel as bitter this time, watching your children running to your attendant as she greeted you all, giving a respectful bow before taking off with the children, most likely heading off to eat. It was quiet as you stood in the garden; everyone else had gone to fill their appetite– it was just you and Sukuna.
"What has you smiling so brightly, Little Flower?"
You had not noticed it, but you had plastered a broad, foolish grin onto your face. Usually, your partner catching this would have brought you anxiety as you thought of the right words, but you did not feel that way– quite the opposite. You were proud that he had noticed, allowing your smile to grow wider.
"I feel like a burden has been lifted off my shoulders, and I cannot wait to leave this place."
"I am glad I could bring such relieving news and bring a smile to your face," Sukuna responded, smiling down at you before taking your chin between his fingers and bending down, "Once you put the children to sleep, come seek me out as we have much more to discuss."
You could only smile stupidly, nodding and allowing Sukuna to kiss you before heading to your children. You did not care what the two-faced monster had to share with you, but you would indulge him because this would be the last time you would ever have to.
You were free.
"Oh, hello, Y/n-sama! We were just finishing our meals. Should I fix you something as well?" your attendant offered, keeping a light-hearted tone.
The young woman had grown more confident with you over the years. The two of you had grown quite close after the birth of your children– she was the only person you full-heartedly trusted with your kids. Maybe you would take her with you in your escape; she was far too good to serve ungrateful and bitter women.
"No, thank you, I am not that hungry; however, I have grown rather tired, meaning it is time for bed."
"Awwwwww," your twins whined in unison, looking at your attendant with puppy dog eyes, hoping she could convince you, only to receive a shake of her head.
The twins stood begrudgingly, approaching your awaiting stance, giving you the same desperate eyes. You gave your own silent response as you offered a warm smile and a quick shake of your head before having them follow you down the halls. In any other scenario, you would have given in, but things were different now. Your children needed to be well-rested for the upcoming events. You were going to give them the life they deserved.
Arriving at their sleep quarters, you slid the door open, allowing the twins in first before following. Before closing the door, you took a peek out into the hallway to make sure no one was approaching. Once you deduced nobody was coming, you slowly and quietly slid the door shut, quick to approach your kids' bedside.
"Mama, do we have to go to bed?" your daughter whined.
"Yeah, do we really have to?" your son followed.
You could not help but lightly chuckle at their resistance to sleep. Your heart filled with warmth as you remembered sharing a similar moment with your mother. There were many occasions they reminded you of yourself, and you could not wait to see more of those similarities manifest when you leave this temple. You could not wait to give them a regular and well-deserved life.
"Yes, you both have to rest. You two need to preserve your energy for the days to come."
That statement piqued their interest, their faces perking up with intrigue.
"What is to come, Mama?" the twins sounded in unison like they did so often in these moments. Sometimes, it was almost as if they shared the same mind.
"Well, soon enough, you will get to meet your grandparents," you whispered, "your cousins, aunts, and uncles, all from Mama's side of the family."
"Really?!" the two shouted, settling down when you gestured for them to lower their voices.
"Yes, but do not tell your father, it is..." you trailed, picking your words carefully, "a surprise visit just for the three of us, and I do not want him to feel left out."
There was no doubt that you despised Sukuna in every sense of the word, but you did not wish for your children to hate him. Believe it or not, you wanted your twins to paint a good picture of their father, and whether that picture remained clean was up to Sukuna himself– you would not tarnish his name for him.
"Okay, Mama, we promise we will not tell." your son spoke for the two of them, his sibling nodding in turn as she motioned to seal her lips.
You smiled, whispering a small thank you before kissing the top of their foreheads and letting them rest. You stood quietly, blowing out the candles illuminating the room before leaving. Once you stepped foot into the hallway, you were startled to see a guard, a familiar one at that, though he had clearly aged with time.
"Y/n-sama, I have been instructed to take you to your sleeping chambers," the male spoke before swiftly turning on his heel to lead you to your room.
The man's voice was cold and almost distant as he spoke to you, but his voice was familiar. You were acquainted with most of the staff within the temple, but you could not remember where you had met him in particular, though he seemed familiar and significant. Your face contorted as your mind pondered, trying to recognize his face in your personal timeline, but nothing came to mind.
"Your wedding night," the guard spoke suddenly, noticing your expression of thought, "I held and guarded the door during your wedding night."
You thought back to your wedding day, and it suddenly hit you. The guard was the same one Sukuna had forced to watch the consummation of your marriage. You quickly grew flustered at the memory, clearing your throat before speaking.
"I recall now," you responded, your voice barely above a whisper.
"Are you happy, Y/n-sama?" another unshakable tone as he questioned you.
Why was he asking this?
"Yes, I'm happy."
You did not know what this man was playing at, but you did not want to fall into any traps, so you gave the preferred answer when this question was presented to you on many occasions.
"Even though you have suffered all these years, bearing and raising his offspring?"
"Excuse me?" you grimaced at the guard's words.
"Nothing, I am sorry, I have overstepped my boundaries. I will leave you now," the man uttered, leaving you at the doorway to your sleeping quarters.
You narrowed your eyes, staring as the male's figure grew smaller in the distance. What did he gain from that interaction? No matter– it was no longer your problem to deal with. Collecting yourself, you entered the room and immediately faced Sukuna.
"Come and close the door. We must speak of these urgent matters in private," Sukuna muttered as he blankly stared at the wall in front of him.
You did not question the man and slid the door closed, approaching him as he turned to you. Before you could speak, Sukuna placed a pair of hands on your shoulders, looking into your eyes. His gaze held no emotion you could directly name, but you could sense an urgency in his tone as he spoke to you.
"We leave tonight. The others have been informed and are gathering their belongings– I advise you to do the same."
"What?! Now?! Sukuna, what is going on that you are not telling anyone?" you urged, staring at him with wide eyes.
"Now is no time to be questioning me, Y/n. Hurry, we are leaving shortly."
"No."
The word slipped out without thought. You did not care when you left because your plans would not change, but your partner was acting strangely, and you could not help but be curious as to why. That curiosity was what led you to stand there motionless as your husband stared you down.
"Stubborn as always, I see," the curse-user muttered, "Fine, you want to know, huh? We made a pact, and I'm upholding the bargain. You told me to protect those children, right? Well, for their interest, we are leaving, so be grateful."
You stood there silently, looking into Sukuna's unwavering gaze.
"What is going on?" you repeated the question.
"Your village plans to lay siege, and we are leaving to not get caught in the firing radius."
That explained the tension and the whispers among the temple. That explained the extra protection. Everything now made sense, and you could not help the feeling of something rising up your throat.
Laughter.
You laughed uncontrollably, trying to cover your mouth to muffle the outburst, but to no avail. Nothing about the situation was logically funny, but you could not control yourself.
"After years of torment, they only now decide to lay siege?" you cackled, "And the best part is that Ryomen Sukuna is fleeing with his tail between his legs."
You should have seen what was to come next when you made that last statement, feeling your hair being tugged to look up at the man you had insulted. Your laugh quickly subsided, swallowing the lump in your throat as you stared into his orbs. You had crossed a line this time, but for once, you were not scared of the intimidation; however, what had shocked you was Sukuna smashing his lips against yours.
"I am the most feared man in Japan– I have no reason to be scared, at least for myself. I am doing this for us and our creation because I love you, Little Flower."
"You do not love me. You love what I can do for you, Sukuna."
"I see where our children have gotten their observance." Sukuna joked, "But you are not entirely wrong. However, that does not change the fact we are leaving right here and now so collec-"
"AHHHHHHHHHHH"
The deformed man paused mid-sentence at the high-pitched scream, storming out of the room to see the commotion. You wasted no time in following him, walking down the hall before being met with the stench of blood. Had one of the pregnant wives gone into labor? Was someone injured? Or was...
Before you could finish that last thought, you were met with the sight of a lifeless body surrounded by its own red fluid. It was disturbingly familiar, and that was because it was the body of the guard that had escorted you earlier. You were shocked at his mangled state, his face just barely beyond recognition, but before you could allow the shock to settle in, another sound of screams was heard in the opposite direction.
Without thought, you bolted in the direction the screams came from. You flew past those blank walls faster than you knew you were capable of before landing at the sight of another body surrounded by women. It was your attendant, her face frozen in fear, her body almost in the same state as the previous one. This death hit you harder than the earlier one as you covered your mouth, keeping the bile from rising up your throat.
Despite the grief and sickness you were feeling, you could only think of one thing, and that was your twins. You lingered for a second longer before running to your twin's bedroom. You had not noticed, but Sukuna trailed behind you closely as you sprinted through the temple. Your breath was running ragged, but you would be damned if you were to leave your twins behind in this gruesome mess.
You made it to the door, sliding it open and rushing in, your eyes scanning the room for your twins, but they were nowhere to be seen. Your heart hammered against your chest as you began to panic, turning to Sukuna to see that his face was once again blank as he looked into the room from the doorway. Why did he have that look on his face? It did not matter– you had to search for your children. You turned to look back into the interior room, looking up from the bedrolls to be met with the wall, and heard the sound of a scream once again, your heart dropping.
You had found your twins hanging from the wall, a message above them written in their own blood.
"Bring back our daughter."
Taglist:
@littlemochi @mistalli @youngbeansprout @bbylime @bangtan-forever1479 @idktbhloley @izayas-rings @o3o-aya @pyschopotatomeme @persephonehemingway @otomaniac @meforpr3sident @alurafairy @nezuscribe @my-simp-land @zukuphilia @niya729 @spiritofstatic @bbittersw33t @kashasenpai @decaysan @honeybaegle @ygslvr @outrofenty @gojosluts7789 @all4koo @hyperfixationsporfavor
635 notes · View notes
not-glorfindel-stop-asking · 4 months ago
Note
Lindir, as Rivendell’s foremost authority on order, aesthetics, and the general well-being of everyone who insists on making your job harder, I come to you with a question of unparalleled importance. Nay, a dilemma of cosmic significance.
In your esteemed and undoubtedly objective opinion, who is the handsomest Elf to walk these lands? And I don’t mean ‘oh, they have inner beauty’ or ‘all Elves are fair’—no, no, I’m talking cold, hard, superficial attractiveness. The kind that makes even the wisest of sages pause mid-lecture, the kind that could cause an Orc to rethink its life choices, the kind that makes bards insufferable for centuries.
Are we talking the effortless, golden-haired drama of Glorfindel? The refined, brooding excellence of Elrond in his ‘exhausted single father who still looks incredible’ era? Does Legolas count, or is he automatically disqualified for being too much of a poster boy? Is Thranduil on the list, or do his personality and overwhelming aura of ‘I’m better than you’ cancel out his cheekbones?
I need the definitive ranking, Lindir.
If anyone can provide a structured, fully annotated, aesthetically sound judgment, it’s you. And if you say ‘beauty is subjective,’ I swear by Eru, I will cause a minor administrative disaster in Rivendell just to see how fast you notice.
Ah.
*Stops meticulously reorganizing scrolls*
You come to me with a question so grave, so unfathomably dangerous, it could cause diplomatic incidents spanning ages.
A query that could ignite wars between realms, have poets weeping into their parchment, and force me—me—to render a judgment that may never be forgiven.
And yet, here I stand. Bravely. Heroically. Willing to risk everything for the sake of objective truth.
You wanted superficial attractiveness? Cold, hard, devastating beauty? Very well. Prepare yourself. This will be petty. This will be unapologetic. This will be factual.
✨ THE DEFINITIVE RIVENDELL-RATIFIED ELF HOTNESS RANKING ✨ (By Lindir)
(Yes, I gave it an official title. You’re welcome.)
LAST PLACE: Thranduil Yes. Thranduil. Come for me, Mirkwood. I fear you not. While no one can deny the impact of those cheekbones—so sharp they could likely cut mithril—the sheer audacity of that personality cancels it all out. The “I’m better than you” aura? Too much. The endless dramatics? Exhausting. Also, have you ever tried to have a conversation with him? It’s like talking to a glass of vintage wine that keeps reminding you it’s better than you’ll ever be. Minus points for pettiness. (Also: I’m petty.)
FIFTH PLACE: Legolas Listen, he’s pretty. We know. But he’s the obvious choice. Poster boy energy. The hair? Immaculate. The skin? Glowing. But too polished. Too perfect. Has he ever suffered? Has he ever known the pain of paperwork? No. And thus, no depth. Minus points for effortless ease. Some of us work very hard to look this tired and elegant.
FOURTH PLACE: Gil-galad Ah, the High King. Golden aura. Regal presence. The drama. He’s like a sunrise that also judges you for your life choices. The crown helps—everything looks better with a bit of sparkle. However, too perfect. His flawlessness makes one suspicious. What are you hiding, Your Majesty? Minus points for suspiciously perfect posture.
THIRD PLACE: Haldir Now, here’s a contender. Stoic. Sharp. Has “will shame you in three languages” energy. Mysterious, aloof—he makes the trees themselves blush. I respect that. Bonus points for being just rude enough to be intriguing but polite enough that you question if you imagined it. Also, cape game? Impeccable.
SECOND PLACE: Elrond The brooding single-father aesthetic? Unmatched. The man walks into a room with the weariness of someone who’s raised twins, run a kingdom, survived multiple wars, and still looks like he models for ancient coinage. The exhausted “I can’t believe I have to deal with this” expression? Iconic. Plus, he has the rare gift of looking incredible while delivering devastating lectures. Bonus points for deep sighs and “I am surrounded by fools” energy.
FIRST PLACE: Eredin (Surprised? You shouldn’t be.) The sweetness. The gentle eyes. The cocoa addiction. The fact that he wields a sweet tooth like a weapon and still manages to look like he stepped out of a romantic ballad. Eredin has that approachable attractiveness. The kind that makes you believe he’d share pastries with you at dawn and also gently remind you to tag sensitive content. Softness with bite. A balance of “I bake” and “I will ruin you with polite concern.” Perfection.
✨ And there you have it. The ultimate ranking. Unbiased. Objective. Canon.
If you disagree? Feel free to file a complaint with Rivendell’s administration. I will personally place it at the bottom of Erestor’s “To-Ignore” pile.
—A final, utterly unbiased note:
I assure you, I am being completely objective in my judgment. Entirely impartial. My personal opinions have no bearing on these results.
It is simply not my fault that my poor, overworked scribe assistant, Eredin, possesses a level of charm and romantic competence (as the mortals say, “rizz”) that could put entire royal bloodlines to shame.
Truly, a tragedy for the rest of us.
—With the utmost grace, impeccable taste, and absolutely no bitterness whatsoever,
✨🌿 Lindir, Keeper of Schedules, Sighs, and Unwanted Chaos 🌿✨
7 notes · View notes
codingquill · 2 years ago
Text
SQL Fundamentals #1: SQL Data Definition
Last year in college, I had the opportunity to dive deep into SQL. The course was made even more exciting by an amazing instructor. Fast forward to today, and I regularly use SQL in my backend development work with PHP. Today, I felt the need to refresh my SQL knowledge a bit, and that's why I've put together three posts aimed at helping beginners grasp the fundamentals of SQL.
Understanding Relational Databases
Let's Begin with the Basics: What Is a Database?
Simply put, a database is like a digital warehouse where you store large amounts of data. When you work on projects that involve data, you need a place to keep that data organized and accessible, and that's where databases come into play.
Exploring Different Types of Databases
When it comes to databases, there are two primary types to consider: relational and non-relational.
Relational Databases: Structured Like Tables
Think of a relational database as a collection of neatly organized tables, somewhat like rows and columns in an Excel spreadsheet. Each table represents a specific type of information, and these tables are interconnected through shared attributes. It's similar to a well-organized library catalog where you can find books by author, title, or genre.
Key Points:
Tables with rows and columns.
Data is neatly structured, much like a library catalog.
You use a structured query language (SQL) to interact with it.
Ideal for handling structured data with complex relationships.
Non-Relational Databases: Flexibility in Containers
Now, imagine a non-relational database as a collection of flexible containers, more like bins or boxes. Each container holds data, but they don't have to adhere to a fixed format. It's like managing a diverse collection of items in various boxes without strict rules. This flexibility is incredibly useful when dealing with unstructured or rapidly changing data, like social media posts or sensor readings.
Key Points:
Data can be stored in diverse formats.
There's no rigid structure; adaptability is the name of the game.
Non-relational databases (often called NoSQL databases) are commonly used.
Ideal for handling unstructured or dynamic data.
Now, Let's Dive into SQL:
SQL is a:
Data Definition Language (what today's post is all about)
Data Manipulation Language
Data Query Language
Task: Building and Interacting with a Bookstore Database
Setting Up the Database
Our first step in creating a bookstore database is to establish it. You can achieve this with a straightforward SQL command:
CREATE DATABASE bookstoreDB;
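One detail worth noting before creating any tables: after creating the database, you have to tell the server to use it. A small MySQL-flavored sketch (the IF NOT EXISTS guard is an optional extra, not part of the original command):

```sql
-- Create the database only if it does not already exist,
-- then select it so later commands target it.
CREATE DATABASE IF NOT EXISTS bookstoreDB;
USE bookstoreDB;
```

Without the USE statement (or a qualified name like bookstoreDB.books), subsequent CREATE TABLE commands would run against whatever database was previously selected.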
SQL Data Definition
As the name suggests, this step is all about defining your tables. By the end of this phase, your database and the tables within it are created and ready for action.
1 - Introducing the 'Books' Table
A bookstore is all about its collection of books, so our 'bookstoreDB' needs a place to store them. We'll call this place the 'books' table. Here's how you create it:
CREATE TABLE books (
    -- Don't worry, we'll fill this in soon!
);
Now, each book has its own set of unique details, including titles, authors, genres, publication years, and prices. These details will become the columns in our 'books' table, ensuring that every book can be fully described.
Now that we have the plan, let's create our 'books' table with all these attributes:
CREATE TABLE books (
    title VARCHAR(40),
    author VARCHAR(40),
    genre VARCHAR(40),
    publishedYear DATE,
    price INT(10)
);
With this structure in place, our bookstore database is ready to house a world of books.
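In practice, tables usually also carry constraints so the database itself rejects bad data. A hedged, MySQL-flavored sketch of the same table — the id column, the NOT NULL choices, and the DECIMAL price are illustrative additions, not part of the original example:

```sql
-- The books table again, with a few common constraints added.
CREATE TABLE books (
    id            INT AUTO_INCREMENT PRIMARY KEY,   -- surrogate key, unique per row
    title         VARCHAR(40) NOT NULL,             -- every book must have a title
    author        VARCHAR(40) NOT NULL,
    genre         VARCHAR(40),                      -- optional
    publishedYear DATE,
    price         DECIMAL(10, 2) CHECK (price >= 0) -- money as decimal, never negative
);
```

DECIMAL is generally preferred over INT for prices because it stores exact fractional values (e.g. 19.99) without rounding.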
2 - Making Changes to the Table
Sometimes, you might need to modify a table you've created in your database. Whether it's correcting an error during table creation, renaming the table, or adding/removing columns, these changes are made using the 'ALTER TABLE' command.
For instance, if you want to rename your 'books' table:
ALTER TABLE books RENAME TO books_table;
If you want to add a new column:
ALTER TABLE books ADD COLUMN description VARCHAR(100);
Or, if you need to delete a column:
ALTER TABLE books DROP COLUMN title;
3 - Dropping the Table
Finally, if you ever want to remove a table you've created in your database, you can do so using the 'DROP TABLE' command:
DROP TABLE books;
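One caveat: DROP TABLE fails with an error if the table does not exist, which matters in scripts you may run more than once. Most systems (MySQL, PostgreSQL, SQLite) support an IF EXISTS guard that makes the command safe to re-run:

```sql
-- Drops the table if present; does nothing (no error) if it is absent.
DROP TABLE IF EXISTS books;
```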
To keep this post concise, our next post will delve into the second step, which involves data manipulation. Once our bookstore database is up and running with its tables, we'll explore how to modify and enrich it with new information and data. Stay tuned ...
Part2
112 notes · View notes
this-week-in-rust · 2 months ago
Text
This Week in Rust 595
Hello and welcome to another issue of This Week in Rust! Rust is a programming language empowering everyone to build reliable and efficient software. This is a weekly summary of its progress and community. Want something mentioned? Tag us at @thisweekinrust.bsky.social on Bluesky or @ThisWeekinRust on mastodon.social, or send us a pull request. Want to get involved? We love contributions.
This Week in Rust is openly developed on GitHub and archives can be viewed at this-week-in-rust.org. If you find any errors in this week's issue, please submit a PR.
Want TWIR in your inbox? Subscribe here.
Updates from Rust Community
Official
March Project Goals Update
Newsletters
The Embedded Rustacean Issue #43
Project/Tooling Updates
Shadertoys ported to Rust GPU
Meilisearch 1.14 - composite embedders, embedding cache, granular filterable attributes, and batch document retrieval by ID
rust-query 0.4: structural types and other new features
Observations/Thoughts
Rebuilding Prime Video UI with Rust and WebAssembly
ALP Rust is faster than C++
what if the poison were rust?
A surprising enum size optimization in the Rust compiler
Two Years of Rust
An ECS lite architecture
A 2025 Survey of Rust GUI Libraries
BTrees, Inverted Indices, and a Model for Full Text Search
Cutting Down Rust Compile Times From 30 to 2 Minutes With One Thousand Crates
SIMD in zlib-rs (part 1): Autovectorization and target features
Avoiding memory fragmentation in Rust with jemalloc
[video] Bevy Basics: Who Observes the Observer
Rust Walkthroughs
Rust Type System Deep Dive From GATs to Type Erasure
Async from scratch 1: What's in a Future, anyway? | natkr's ramblings
Async from scratch 2: Wake me maybe | natkr's ramblings
Building a search engine from scratch, in Rust: part 4
Pretty State Machine Patterns in Rust
[video] Build with Naz : Declarative macros in Rust
Miscellaneous
March 2025 Jobs Report
Rust resources
Crate of the Week
This week's crate is wgpu, a cross-platform graphics and compute library based on WebGPU.
Despite a lack of suggestions, llogiq is pleased with his choice.
Please submit your suggestions and votes for next week!
Calls for Testing
An important step for RFC implementation is for people to experiment with the implementation and give feedback, especially before stabilization.
If you are a feature implementer and would like your RFC to appear in this list, add a call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
No calls for testing were issued this week by Rust, Rust language RFCs or Rustup.
Let us know if you would like your feature to be tracked as a part of this list.
RFCs
Rust
Rustup
If you are a feature implementer and would like your RFC to appear on the above list, add the new call-for-testing label to your RFC along with a comment providing testing instructions and/or guidance on which aspect(s) of the feature need testing.
Call for Participation; projects and speakers
CFP - Projects
Always wanted to contribute to open-source projects but did not know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!
Some of these tasks may also have mentors available, visit the task page for more information.
rama - add serve command to rama-cli
rama - add support for include_dir for to ServeDir and related
rama - add curl module to rama-http-types
If you are a Rust project owner and are looking for contributors, please submit tasks here or through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!
CFP - Events
Are you a new or experienced speaker looking for a place to share something cool? This section highlights events that are being planned and are accepting submissions to join their event as a speaker.
If you are an event organizer hoping to expand the reach of your event, please submit a link to the website through a PR to TWiR or by reaching out on X (formerly Twitter) or Mastodon!
Updates from the Rust Project
480 pull requests were merged in the last week
Compiler
detect and provide suggestion for &raw EXPR
don't suggest the use of impl Trait in closure parameter
make the compiler suggest actual paths instead of visible paths if the visible paths are through any doc hidden path
tell LLVM about impossible niche tags
remove Nonterminal and TokenKind::Interpolated
re-use Sized fast-path
Library
add core::intrinsics::simd::{simd_extract_dyn, simd_insert_dyn}
initial UnsafePinned implementation (Part 1: Libs)
polymorphize array::IntoIter's iterator impl
speed up String::push and String::insert
std: add Output::exit_ok
Cargo
added symlink resolution for workspace-path-hash
improved error message when build-dir template var is invalid
Rustdoc
search: add unbox flag to Result aliases
enable Markdown extensions when looking for doctests
Clippy
arbitrary_source_item_ordering should ignore test modules
implicit_return: better handling of asynchronous code
accept self.cmp(other).into() as canonical PartialOrd impl
add manual_abs_diff lint
consecutive returns dont decrease cognitive Complexity level anymore
consider nested lifetimes in mut_from_ref
correctly handle bracketed type in default_constructed_unit_struct
deprecate match_on_vec_items lint
do not propose to auto-derive Clone in presence of unsafe fields
fix: iter_cloned_collect false positive with custom From/IntoIterator impl
fix: map_entry: don't emit lint before checks have been performed
fix: redundant_clone false positive in overlapping lifetime
various fixes for manual_is_power_of_two
Rust-Analyzer
ast: return correct types for make::expr_* methods
add children modules feature
add normalizeDriveLetter
distribute x64 and aarch64 Linux builds with PGO optimizations
fix dyn compatibility code bypassing callable_item_signature query
fix a small bug with catastrophic effects
fix an incorrect ExpressionStore that was passed
prevent panics when there is a cyclic dependency between closures
shadow type by module
ignore errors from rustfmt which may trigger error notification
port closure inference from rustc
Rust Compiler Performance Triage
Relatively small changes this week, nothing terribly impactful (positive or negative).
Triage done by @simulacrum. Revision range: e643f59f..15f58c46
1 Regressions, 3 Improvements, 3 Mixed; 2 of them in rollups. 35 artifact comparisons made in total.
Full report here
Approved RFCs
Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:
No RFCs were approved this week.
Final Comment Period
Every week, the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now.
Tracking Issues & PRs
Rust
Split elided_lifetime_in_paths into tied and untied
check types of const param defaults
Stabilize flags for doctest cross compilation
Do not remove trivial SwitchInt in analysis MIR
Implement a lint for implicit autoref of raw pointer dereference - take 2
Implement Default for raw pointers
make abi_unsupported_vector_types a hard error
Stabilize let chains in the 2024 edition
Make closure capturing have consistent and correct behaviour around patterns
Stabilize the cell_update feature
Other Areas
No Items entered Final Comment Period this week for Rust RFCs, Cargo, Language Team, Language Reference or Unsafe Code Guidelines.
Let us know if you would like your PRs, Tracking Issues or RFCs to be tracked as a part of this list.
New and Updated RFCs
No New or Updated RFCs were created this week.
Upcoming Events
Rusty Events between 2025-04-16 - 2025-05-14 🦀
Virtual
2025-04-16 | Virtual (Vancouver, BC, CA) | Vancouver Rust
Rust Study/Hack/Hang-out
2025-04-17 | Virtual and In-Person (Redmond, WA, US) | Seattle Rust User Group
April, 2025 SRUG (Seattle Rust User Group) Meetup
2025-04-22 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
Fourth Tuesday
2025-04-23 | Virtual (Cardiff, UK) | Rust and C++ Cardiff
Beyond embedded - OS development in Rust
2025-04-24 | Virtual (Berlin, DE) | Rust Berlin
Rust Hack and Learn
2025-04-24 | Virtual (Charlottesville, VA, US) | Charlottesville Rust Meetup
Part 2: Quantum Computers Can’t Rust-Proof This!
2025-05-03 | Virtual (Kampala, UG) | Rust Circle Meetup
Rust Circle Meetup
2025-05-05 | Virtual (Tel Aviv-Yafo, IL) | Rust 🦀 TLV
Tauri: Cross-Platform desktop applications with Rust and web technologies
2025-05-07 | Virtual (Indianapolis, IN, US) | Indy Rust
Indy.rs - with Social Distancing
2025-05-08 | Virtual (Berlin, DE) | Rust Berlin
Rust Hack and Learn
2025-05-13 | Virtual (Dallas, TX, US) | Dallas Rust User Meetup
Second Tuesday
Asia
2025-04-22 | Tel Aviv-Yafo, IL | Rust 🦀 TLV
In person Rust April 2025 at Braavos in Tel Aviv in collaboration with StarkWare
Europe
2025-04-19 | Istanbul, TR | Türkiye Rust Community
Rust Konf Türkiye
2025-04-23 | London, UK | London Rust Project Group
Fusing Python with Rust using raw C bindings
2025-04-24 | Aarhus, DK | Rust Aarhus
Talk Night at MFT Energy
2025-04-24 | Edinburgh, UK | Rust and Friends
Rust and Friends (evening pub)
2025-04-24 | Manchester, UK | Rust Manchester
Rust Manchester April Code Night
2025-04-25 | Edinburgh, UK | Rust and Friends
Rust and Friends (daytime coffee)
2025-04-26 | Stockholm, SE | Stockholm Rust
Ferris' Fika Forum #11
2025-04-29 | London, UK | Rust London User Group
LDN Talks April 2025 Community Showcase
2025-04-29 | Paris, FR | Rust Paris
Rust meetup #76
2025-04-30 | Frankfurt, DE | Rust Rhein-Main
Kubernetes Operator in Rust
2025-05-01 | Nürnberg, DE | Rust Nuremberg
Hackers Hike 0x0
2025-05-06 - 2025-05-07 | Paris, FR | WebAssembly and Rust Meetup
GOSIM AI Paris 2025
2025-05-06 | Paris, FR | WebAssembly and Rust Meetup (Wasm Empowering AI)
GOSIM AI Paris 2025 (Discount available)
2025-05-07 | Madrid, ES | MadRust
VII Lenguajes, VII Perspectivas, I Problema
2025-05-07 | Oxford, UK | Oxford Rust Meetup Group
Oxford Rust and C++ social
2025-05-08 | Gdansk, PL | Rust Gdansk
Rust Gdansk Meetup #8
2025-05-08 | London, UK | London Rust Project Group
Adopting Rust (Hosted by Lloyds bank)
2025-05-13 | Amsterdam, NL | RustNL
RustWeek 2025 announcement
2025-05-13 - 2025-05-17 | Utrecht, NL | Rust NL
RustWeek 2025
2025-05-14 | Reading, UK | Reading Rust Workshop
Reading Rust Meetup
North America
2025-04-17 | Mountain View, CA, US | Hacker Dojo
RUST MEETUP at HACKER DOJO
2025-04-17 | Nashville, TN, US | Music City Rust Developers
Using Rust For Web Series 1 : Why HTMX Is Bad
2025-04-17 | Redmond, WA, US | Seattle Rust User Group
April, 2025 SRUG (Seattle Rust User Group) Meetup
2025-04-22 | Detroit, MI, US | Detroit Rust
Rust Community Meet and Conference Report - Ann Arbor
2025-04-23 | Austin, TX, US | Rust ATX
Rust Lunch - Fareground
2025-04-23 | Spokane, WA, US | Spokane Rust
Community Show & Tell at Fuel Coworking
2025-04-24 | Atlanta, GA, US | Rust Atlanta
3rd 3RD TIME OMG YES!
2025-04-25 | Boston, MA, US | Boston Rust Meetup
Ball Square Rust Lunch, Apr 25
2025-05-01 | Saint Louis, MO, US | STL Rust
SIUE Capstone Project reflections on Rust
2025-05-03 | Boston, MA, US | Boston Rust Meetup
Boston Common Rust Lunch, May 3
2025-05-08 | México City, MX | Rust MX
Calculando con el compilador: Compiler time vs Run time
2025-05-08 | Portland, OR, US | PDXRust
Apache DataFusion: A Fast, Extensible, Modular Analytic Query Engine in Rust
2025-05-11 | Boston, MA, US | Boston Rust Meetup
Porter Square Rust Lunch, May 11
Oceania
2025-04-22 | Barton, AC, AU | Canberra Rust User Group
April Meetup
If you are running a Rust event please add it to the calendar to get it mentioned here. Please remember to add a link to the event too. Email the Rust Community Team for access.
Jobs
Please see the latest Who's Hiring thread on r/rust
Quote of the Week
IEEE 754 floating point, proudly providing counterexamples since 1985!
– Johannes Dahlström on rust-internals
Thanks to Ralf Jung for the suggestion!
Please submit quotes and vote for next week!
This Week in Rust is edited by: nellshamrell, llogiq, cdmistman, ericseppanen, extrawurst, U007D, joelmarcey, mariannegoldin, bennyvasquez, bdillo
Email list hosting is sponsored by The Rust Foundation
Discuss on r/rust
5 notes · View notes
xaltius · 3 months ago
Text
Unlocking the Power of Data: Essential Skills to Become a Data Scientist
Tumblr media
In today's data-driven world, the demand for skilled data scientists is skyrocketing. These professionals are the key to transforming raw information into actionable insights, driving innovation and shaping business strategies. But what exactly does it take to become a data scientist? It's a multidisciplinary field, requiring a unique blend of technical prowess and analytical thinking. Let's break down the essential skills you'll need to embark on this exciting career path.
1. Strong Mathematical and Statistical Foundation:
At the heart of data science lies a deep understanding of mathematics and statistics. You'll need to grasp concepts like:
Linear Algebra and Calculus: Essential for understanding machine learning algorithms and optimizing models.
Probability and Statistics: Crucial for data analysis, hypothesis testing, and drawing meaningful conclusions from data.
2. Programming Proficiency (Python and/or R):
Data scientists are fluent in at least one, if not both, of the dominant programming languages in the field:
Python: Known for its readability and extensive libraries like Pandas, NumPy, Scikit-learn, and TensorFlow, making it ideal for data manipulation, analysis, and machine learning.
R: Specifically designed for statistical computing and graphics, R offers a rich ecosystem of packages for statistical modeling and visualization.
3. Data Wrangling and Preprocessing Skills:
Raw data is rarely clean and ready for analysis. A significant portion of a data scientist's time is spent on:
Data Cleaning: Handling missing values, outliers, and inconsistencies.
Data Transformation: Reshaping, merging, and aggregating data.
Feature Engineering: Creating new features from existing data to improve model performance.
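To make the wrangling steps above concrete, here is a small pandas sketch; the dataset, column names, and imputation strategy are illustrative assumptions, not a fixed recipe:

```python
import pandas as pd
import numpy as np

# Illustrative raw data with typical problems: missing values and an outlier.
df = pd.DataFrame({
    "customer_id": [1, 2, 3, 4, 5],
    "age": [34, np.nan, 29, 120, 41],          # a NaN and an implausible value
    "monthly_spend": [250.0, 310.5, np.nan, 95.0, 180.0],
})

# Data cleaning: treat the outlier as missing, then impute with the median.
df.loc[df["age"] > 100, "age"] = np.nan
df["age"] = df["age"].fillna(df["age"].median())
df["monthly_spend"] = df["monthly_spend"].fillna(df["monthly_spend"].median())

# Data transformation: bucket ages and aggregate spend per bucket.
df["age_band"] = pd.cut(df["age"], bins=[0, 30, 40, 200],
                        labels=["<30", "30-40", "40+"])
summary = df.groupby("age_band", observed=True)["monthly_spend"].mean()

# Feature engineering: derive a new feature from existing columns.
df["spend_per_year_of_age"] = df["monthly_spend"] / df["age"]

print(summary)
```

In real projects the cleaning rules come from domain knowledge; the point here is only the shape of the workflow: clean, transform, then derive features.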
4. Expertise in Databases and SQL:
Data often resides in databases. Proficiency in SQL (Structured Query Language) is essential for:
Extracting Data: Querying and retrieving data from various database systems.
Data Manipulation: Filtering, joining, and aggregating data within databases.
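The two tasks above can be sketched with Python's built-in sqlite3 module; the `orders` table and its columns are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "alice", 120.0), (2, "bob", 75.5), (3, "alice", 60.0)],
)

# Extracting data: filter rows with a WHERE clause.
big_orders = conn.execute(
    "SELECT id, amount FROM orders WHERE amount > 70 ORDER BY amount DESC"
).fetchall()

# Data manipulation: aggregate per customer with GROUP BY.
totals = conn.execute(
    "SELECT customer, SUM(amount) FROM orders GROUP BY customer ORDER BY customer"
).fetchall()

print(big_orders)
print(totals)
conn.close()
```

The same SELECT/WHERE/GROUP BY patterns carry over directly to production systems like PostgreSQL or BigQuery; only the connection layer changes.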
5. Machine Learning Mastery:
Machine learning is a core component of data science, enabling you to build models that learn from data and make predictions or classifications. Key areas include:
Supervised Learning: Regression, classification algorithms.
Unsupervised Learning: Clustering, dimensionality reduction.
Model Selection and Evaluation: Choosing the right algorithms and assessing their performance.
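As a toy illustration of the supervised-learning workflow (fit on training data, evaluate on held-out data), here is a from-scratch 1-nearest-neighbor classifier; in practice you would typically reach for Scikit-learn estimators instead:

```python
import math
import random

def nn_predict(train, point):
    """1-nearest-neighbor: return the label of the closest training point."""
    closest = min(train, key=lambda ex: math.dist(ex[0], point))
    return closest[1]

# Toy 2-class dataset: points clustered around (0, 0) and (5, 5).
random.seed(0)
data = [((random.gauss(cx, 1.0), random.gauss(cy, 1.0)), label)
        for label, (cx, cy) in enumerate([(0, 0), (5, 5)])
        for _ in range(50)]

# Train/test split: evaluation must use data the model never saw.
random.shuffle(data)
train, test = data[:80], data[80:]

correct = sum(nn_predict(train, x) == y for x, y in test)
accuracy = correct / len(test)
print(f"held-out accuracy: {accuracy:.2f}")
```

The held-out split is the key idea: it is what "model selection and evaluation" measures, regardless of which algorithm sits behind `nn_predict`.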
6. Data Visualization and Communication Skills:
Being able to effectively communicate your findings is just as important as the analysis itself. You'll need to:
Visualize Data: Create compelling charts and graphs to explore patterns and insights using libraries like Matplotlib, Seaborn (Python), or ggplot2 (R).
Tell Data Stories: Present your findings in a clear and concise manner that resonates with both technical and non-technical audiences.
7. Critical Thinking and Problem-Solving Abilities:
Data scientists are essentially problem solvers. You need to be able to:
Define Business Problems: Translate business challenges into data science questions.
Develop Analytical Frameworks: Structure your approach to solve complex problems.
Interpret Results: Draw meaningful conclusions and translate them into actionable recommendations.
8. Domain Knowledge (Optional but Highly Beneficial):
Having expertise in the specific industry or domain you're working in can give you a significant advantage. It helps you understand the context of the data and formulate more relevant questions.
9. Curiosity and a Growth Mindset:
The field of data science is constantly evolving. A genuine curiosity and a willingness to learn new technologies and techniques are crucial for long-term success.
10. Strong Communication and Collaboration Skills:
Data scientists often work in teams and need to collaborate effectively with engineers, business stakeholders, and other experts.
Kickstart Your Data Science Journey with Xaltius Academy's Data Science and AI Program:
Acquiring these skills can seem like a daunting task, but structured learning programs can provide a clear and effective path. Xaltius Academy's Data Science and AI Program is designed to equip you with the essential knowledge and practical experience to become a successful data scientist.
Key benefits of the program:
Comprehensive Curriculum: Covers all the core skills mentioned above, from foundational mathematics to advanced machine learning techniques.
Hands-on Projects: Provides practical experience working with real-world datasets and building a strong portfolio.
Expert Instructors: Learn from industry professionals with years of experience in data science and AI.
Career Support: Offers guidance and resources to help you launch your data science career.
Becoming a data scientist is a rewarding journey that blends technical expertise with analytical thinking. By focusing on developing these key skills and leveraging resources like Xaltius Academy's program, you can position yourself for a successful and impactful career in this in-demand field. The power of data is waiting to be unlocked – are you ready to take the challenge?
3 notes · View notes
pranaywahi · 4 days ago
Text
From Keywords to Conversations: How Search Has Evolved
Tumblr media
Fast forward to 2025, and search is no longer about keyword matching. It’s about understanding human conversations, context, and intent. Google doesn’t just crawl web pages anymore; it thinks, it interprets, and it even responds. What we’re seeing is the shift from keyword-based SEO to conversation-driven search.
The Keyword Era: When Simplicity Was Enough
Back in the 2000s and early 2010s, SEO was largely reliable. If you wanted to rank for “best pizza in Delhi,” you just needed to include that phrase in your title, your heading, and your body content — a few too many times. The system worked because search engines weren’t smart enough to question the user’s true intent. They only saw the literal text.
But the problem with keyword stuffing and mechanical optimization was that it never served the user. It served the algorithm. People landed on pages that didn’t quite answer their questions, didn’t speak their language, and didn’t understand what they really meant.
From Phrases to Intent: The Rise of Smarter Search
As AI became more integrated into search engines, the game changed. Google’s updates, from the Multitask Unified Model (MUM) to the Search Generative Experience (SGE), have all been steps toward one goal: understanding what users are trying to say, not just what they’re typing.
That’s why, in 2025, your content needs to think like your audience. Instead of matching keywords, you need to mirror conversations. Your blogs, product pages, FAQs, all of them should sound like they’re part of a helpful chat. Because that’s how AI is processing them.
Platforms like SeoBix have quietly adapted to this shift. Rather than offering outdated keyword tools, they provide deep insights into how people actually phrase questions, how search engines interpret them, and how to build content that fits naturally into those evolving patterns.
Voice Search and AI Assistants Changed the Tone
Another major catalyst in this shift has been the rise of voice search and AI-driven virtual assistants.
Search engines had to evolve, and so did SEO strategies. Now, content that ranks is the content that converses. It reads naturally, anticipates follow-up questions, and creates a seamless flow from one idea to the next.
With SeoBix, creators don’t need to guess what that flow should be. The platform analyzes conversation trends, user behavior, and intent-based search journeys to help you craft content that’s not just findable, but meaningful.
AI Overviews and Zero-Click Results: New Rules, New Reality
In today’s search results, users often get what they need before they click. AI Overviews, answer boxes, and featured snippets now dominate the top of the page. That means your content doesn’t just need to rank — it needs to be concise, direct, and instantly valuable.
To show up in these spots, you have to structure your content like an expert yet make it feel like a casual explanation. That’s not always easy, especially when you’re dealing with complex topics.
This is where platforms like SeoBix prove their worth. They help structure your messaging for AI clarity without losing your brand’s voice or readability.
Search Today Is a Dialogue, Not a Directory
Search is no longer a static query that pulls up a list of links. It’s a dynamic dialogue , a back-and-forth between human curiosity and machine understanding. And the businesses that thrive in this environment are the ones that don’t just talk at users. They listen. They respond. They adapt.
SEO in 2025 isn’t dead. It’s just smarter, more human, and deeply integrated with the ways people speak, not just how they search. And if you’re using tools built for the old web, you’ll miss out on the new one.
Conclusion
If you want your brand to stay relevant, your content must go beyond keywords. It must feel like it’s part of the conversation already happening in the user’s mind.
With platforms like SeoBix helping you bridge the gap between AI understanding and human intention, you’re not just optimizing for search engines , you’re creating content that genuinely connects.
Because in the end, great SEO isn’t about chasing algorithms. It’s about joining the conversation.
2 notes · View notes
sarkariresultdude · 8 days ago
Text
Combined Graduate Level Exam: Eligibility Rules You Must Know
The Combined Graduate Level (CGL) Examination, conducted by the Staff Selection Commission (SSC), is one of the most prestigious and sought-after government recruitment exams in India. It opens the doors to a wide range of Group B and Group C posts in various ministries, departments, and subordinate offices under the Government of India. Each year, lakhs of aspirants from across the country compete for a limited number of vacancies, making it one of the most competitive exams in the nation.
Combined graduate level examination eligibility
Tumblr media
1. Objective of the SSC CGL Exam
The examination ensures a transparent and merit-based selection process for jobs that offer stability, security, and the prestige of working for the government.
The posts include roles such as:
Assistant Section Officer (ASO)
Inspector of Income Tax
Assistant Audit Officer
Central Excise Inspector
Statistical Investigator
Auditor
Junior Accountant
Divisional Accountant, and many more.
2. Eligibility Criteria
To apply for SSC CGL, candidates must fulfill the following primary eligibility criteria:
a) Educational Qualification
A bachelor’s degree in any discipline from a recognized university is the minimum requirement.
For certain posts, specific qualifications may be required (e.g., a Statistics degree for Statistical Investigator).
b) Age Limit
Age limits vary depending on the post, generally between 18 and 32 years.
Age relaxation is provided to candidates belonging to reserved categories (SC/ST/OBC/PwD).
c) Nationality
Candidates must be Indian citizens or belong to other eligible categories as defined by the SSC.
3. Structure of the Exam
The SSC CGL exam is conducted in the following tiers:
Tier-I: Preliminary Examination
Objective type, online
Total Marks: 200
Time: 60 mins
Tier-II: Main Examination
Objective type, online
Papers include:
Paper I: Quantitative Abilities
Paper II: English Language and Comprehension
Paper III: Statistics (for relevant posts)
Paper IV: General Studies (Finance & Economics, for the AAO post)
Negative marking applies
Tier-III: Descriptive Paper
Pen and paper-based
Essay/Letter/Precis writing
Marks: 100
Language: English or Hindi
Duration: 60 mins
Conducted for specific posts
4. Syllabus Overview
a) General Intelligence & Reasoning
Analogies, classification, coding-decoding, puzzle solving, syllogisms, and pattern recognition
b) Quantitative Aptitude
Number system, percentages, mensuration, profit & loss, ratio and proportion, time & work, algebra, geometry, trigonometry
c) English Comprehension
Grammar, vocabulary, comprehension, sentence correction, cloze tests
d) General Awareness
Current affairs, history, geography, polity, economics, general science
5. Preparation Strategy
Preparing for the SSC CGL exam requires consistent effort, a strategic study plan, and smart time management.
a) Understand the Exam Pattern
Know the weightage of each section
Practice with previous years' question papers
b) Focus on Basics
Strengthen your fundamentals in math and English
Make short notes for revision of GK and current affairs
c) Regular Practice
Attempt daily mock tests
Improve speed and accuracy
d) Stay Updated
Read newspapers and follow monthly current affairs magazines
Use apps and online platforms for daily quizzes
6. Job Roles and Perks
SSC CGL-selected candidates are placed in prestigious positions with the Government of India. Some of the benefits include:
Attractive Salary Packages: Ranging from Rs. 35,000 to Rs. 70,000 depending on the post and location.
Job Security and Pension: Government jobs offer unmatched job security and post-retirement benefits.
Growth Opportunities: Regular promotions and opportunities to take departmental exams.
7. Challenges Faced by Aspirants
Despite the appeal of the CGL examination, aspirants face several challenges:
High Competition: With over 20 lakh applicants annually, competition is fierce.
Changing Exam Patterns: The SSC occasionally modifies the pattern and syllabus, requiring adaptability.
Limited Seats: With just a few thousand vacancies, only the most prepared candidates succeed.
Preparation Time: It requires long-term consistent preparation, often for over a year.
8. Recent Changes and Reforms
The SSC has been working to make the CGL examination more transparent and efficient:
Online Application and Computer-Based Tests: To reduce errors and speed up processing.
Normalization of Scores: Ensures fairness across different shifts.
Single Year Calendar: SSC now releases an annual calendar for all exams, allowing better planning.
9. Role of Coaching and Self-Study
Many aspirants join coaching institutes to prepare for the CGL exam, especially for help in math and reasoning. However, with the rise of digital learning platforms, self-study with online resources, YouTube tutorials, and mock test series has become a popular and effective approach for many.
2 notes · View notes
frank-olivier · 8 months ago
Text
Tumblr media
Bayesian Active Exploration: A New Frontier in Artificial Intelligence
The field of artificial intelligence has seen tremendous growth and advancement in recent years, with various techniques and paradigms emerging to tackle complex problems in machine learning, computer vision, and natural language processing. Two concepts that have attracted particular attention are active inference and Bayesian mechanics. Although both techniques have been researched separately, their synergy has the potential to revolutionize AI by creating more efficient, accurate, and effective systems.
Traditional machine learning algorithms rely on a passive approach, where the system receives data and updates its parameters without actively influencing the data collection process. However, this approach can have limitations, especially in complex and dynamic environments. Active inference, on the other hand, allows AI systems to take an active role in selecting the most informative data points or actions to collect more relevant information. In this way, active inference allows systems to adapt to changing environments, reducing the need for labeled data and improving the efficiency of learning and decision-making.
One of the first milestones in active inference was the development of the "query by committee" algorithm by Freund et al. in 1997. This algorithm used a committee of models to determine the most meaningful data points to capture, laying the foundation for future active learning techniques. Another important milestone was the introduction of "uncertainty sampling" by Lewis and Gale in 1994, which selected data points with the highest uncertainty or ambiguity to capture more information.
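The idea behind uncertainty sampling can be sketched in a few lines: given a model's predicted class probabilities for a pool of unlabeled points, query the point whose prediction is most ambiguous, here measured by Shannon entropy. The probabilities below are invented for illustration:

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical model outputs: class probabilities for four unlabeled points.
pool = {
    "x1": [0.98, 0.02],  # confident prediction -> low entropy
    "x2": [0.55, 0.45],  # ambiguous prediction -> high entropy
    "x3": [0.80, 0.20],
    "x4": [0.65, 0.35],
}

# Uncertainty sampling: request a label for the most ambiguous point next.
query = max(pool, key=lambda x: entropy(pool[x]))
print(query)  # "x2" -- the point the model is least sure about
```

Query-by-committee follows the same template, except the score is disagreement among a committee of models rather than a single model's entropy.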
Bayesian mechanics, on the other hand, provides a probabilistic framework for reasoning and decision-making under uncertainty. By modeling complex systems using probability distributions, Bayesian mechanics enables AI systems to quantify uncertainty and ambiguity, thereby making more informed decisions when faced with incomplete or noisy data. Bayesian inference, the process of updating the prior distribution using new data, is a powerful tool for learning and decision-making.
One of the first milestones in Bayesian mechanics was the development of Bayes' theorem by Thomas Bayes in 1763. This theorem provided a mathematical framework for updating the probability of a hypothesis based on new evidence. Another important milestone was the introduction of Bayesian networks by Pearl in 1988, which provided a structured approach to modeling complex systems using probability distributions.
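As a quick worked example of Bayes' theorem, P(H|D) = P(D|H) * P(H) / P(D), consider a diagnostic test; the rates below are illustrative, not drawn from any of the works cited:

```python
# Bayes' theorem on a diagnostic test (illustrative numbers):
# 1% base rate, 95% sensitivity, 90% specificity.
p_disease = 0.01
p_pos_given_disease = 0.95      # P(+ | disease)
p_pos_given_healthy = 0.10      # P(+ | healthy) = 1 - specificity

# P(+) via the law of total probability.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: P(disease | +) = P(+ | disease) * P(disease) / P(+)
posterior = p_pos_given_disease * p_disease / p_pos
print(f"P(disease | positive test) = {posterior:.3f}")
```

Despite the accurate test, the posterior stays below 10% because the 1% prior dominates; this is exactly the uncertainty quantification that Bayesian mechanics builds on.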
While active inference and Bayesian mechanics each have their strengths, combining them has the potential to create a new generation of AI systems that can actively collect informative data and update their probabilistic models to make more informed decisions. The combination of active inference and Bayesian mechanics has numerous applications in AI, including robotics, computer vision, and natural language processing. In robotics, for example, active inference can be used to actively explore the environment, collect more informative data, and improve navigation and decision-making. In computer vision, active inference can be used to actively select the most informative images or viewpoints, improving object recognition or scene understanding.
Timeline:
1763: Bayes' theorem
1988: Bayesian networks
1994: Uncertainty Sampling
1997: Query by Committee algorithm
2017: Deep Bayesian Active Learning
2019: Bayesian Active Exploration
2020: Active Bayesian Inference for Deep Learning
2020: Bayesian Active Learning for Computer Vision
The synergy of active inference and Bayesian mechanics is expected to play a crucial role in shaping the next generation of AI systems. Some possible future developments in this area include:
- Combining active inference and Bayesian mechanics with other AI techniques, such as reinforcement learning and transfer learning, to create more powerful and flexible AI systems.
- Applying the synergy of active inference and Bayesian mechanics to new areas, such as healthcare, finance, and education, to improve decision-making and outcomes.
- Developing new algorithms and techniques that integrate active inference and Bayesian mechanics, such as Bayesian active learning for deep learning and Bayesian active exploration for robotics.
Dr. Sanjeev Namjosh: The Hidden Math Behind All Living Systems - On Active Inference, the Free Energy Principle, and Bayesian Mechanics (Machine Learning Street Talk, October 2024)
youtube
Saturday, October 26, 2024
6 notes · View notes
gywo · 19 days ago
Text
Did You Miss It?
If you haven’t been around Dreamwidth recently, you may have missed a great post at GYWO!  All links are member-locked, so if you’re not signed up for the challenge, you won’t be able to access them.
Challenges
Roll the Dice Challenge: Roll a word count goal for a 2-day challenge
The Language of Flowers Challenge: Get assigned a certain flower and its meaning
Permission to Suck: Sucky first drafts are a go!
Discussions
Goal Achievement Through Steady Progress: How can making small steps every day (or at least semi-regularly) get you to your goals quickly?
How Do You Level Up?: Figuring out what to focus on and how to develop are only the first steps to improving as a writer. Let's chat about what can help in your writer growth
Picking Character POV: Things to consider when choosing the POV character for your story or scene
Searching High And Low For A Good Home or Where Can I Even Put This Darn Story?: Do you have a short story you want to get published? Here's a list of places where you can find markets open to submissions
What Are You Working On?: Fill out the question form and leave a comment to let everyone know about your current work in progress
Mindful Participation in Fandom: Whether you're a fandom old or a newbie, we've got you covered with ways to interact with fandom and keep your cool
The Duck Pond: Are you stuck on a concept? Not sure where to go with a character? Just need help making a decision? Talk to a rubber duck and find a solution!
Meshing With An Editor - What's a Good Fit?: How do you go about finding the right editor for your writing?
Bold Kitten Publishing: a Bold Kitten Guide: You, too, can publish like a Bold Kitten—learn how to gather your peeps, and repeat, repeat, repeat, as well as why bold publishing works
Writer Kindness: a Bold 🐈 Kitten's Guide: We writers need to remember that kindness isn’t always just for others—it’s also for ourselves! Here are some ways we can show kindness to ourselves as writers
Tips for Writing with Romancing the Beat Structure: Advice on planning your romance story using the Romancing the Beat structure and making the beats your own
Support
Chronic Illness Support Post: Writing when you’re not in the mood
Pep Talk: If you're behind in your pledge, you may be thinking about quitting. The mods are here to tell you (with data-backed evidence) you are not alone! Continuing to try is a success all by itself
Submission Soirée
Submission Soirée: Prepare a manuscript for submission with support from the community
Submission Soirée Progress Check 1: Week 2 Focus: Query & Cover Letters
Submission Soirée Progress Check 2: Week 3 Focus: AO3, Wattpad, and Self-Publishing Tips
Submission Soirée Wrap Up: Tell us about your accomplishments, final status, and your wrap-up thoughts
GYWO Yahtzee
Yahtzee Leaderboard & Digest: Fills April 29–May 5
Yahtzee Leaderboard & Digest: Fills May 6–May 26
Monthly Check-In
May Check-In: Don't forget to check in for May! Check-in closes June 5th
2 notes · View notes
aiseoexperteurope · 23 days ago
Text
WHAT IS VERTEX AI SEARCH
Vertex AI Search: A Comprehensive Analysis
1. Executive Summary
Vertex AI Search emerges as a pivotal component of Google Cloud's artificial intelligence portfolio, offering enterprises the capability to deploy search experiences with the quality and sophistication characteristic of Google's own search technologies. This service is fundamentally designed to handle diverse data types, both structured and unstructured, and is increasingly distinguished by its deep integration with generative AI, most notably through its out-of-the-box Retrieval Augmented Generation (RAG) functionalities. This RAG capability is central to its value proposition, enabling organizations to ground large language model (LLM) responses in their proprietary data, thereby enhancing accuracy, reliability, and contextual relevance while mitigating the risk of generating factually incorrect information.
The platform's strengths are manifold, stemming from Google's decades of expertise in semantic search and natural language processing. Vertex AI Search simplifies the traditionally complex workflows associated with building RAG systems, including data ingestion, processing, embedding, and indexing. It offers specialized solutions tailored for key industries such as retail, media, and healthcare, addressing their unique vernacular and operational needs. Furthermore, its integration within the broader Vertex AI ecosystem, including access to advanced models like Gemini, positions it as a comprehensive solution for building sophisticated AI-driven applications.
However, the adoption of Vertex AI Search is not without its considerations. The pricing model, while granular and offering a "pay-as-you-go" approach, can be complex, necessitating careful cost modeling, particularly for features like generative AI and always-on components such as Vector Search index serving. User experiences and technical documentation also point to potential implementation hurdles for highly specific or advanced use cases, including complexities in IAM permission management and evolving query behaviors with platform updates. The rapid pace of innovation, while a strength, also requires organizations to remain adaptable.
Ultimately, Vertex AI Search represents a strategic asset for organizations aiming to unlock the value of their enterprise data through advanced search and AI. It provides a pathway to not only enhance information retrieval but also to build a new generation of AI-powered applications that are deeply informed by and integrated with an organization's unique knowledge base. Its continued evolution suggests a trajectory towards becoming a core reasoning engine for enterprise AI, extending beyond search to power more autonomous and intelligent systems.
2. Introduction to Vertex AI Search
Vertex AI Search is establishing itself as a significant offering within Google Cloud's AI capabilities, designed to transform how enterprises access and utilize their information. Its strategic placement within the Google Cloud ecosystem and its core value proposition address critical needs in the evolving landscape of enterprise data management and artificial intelligence.
Defining Vertex AI Search
Vertex AI Search is a service integrated into Google Cloud's Vertex AI Agent Builder. Its primary function is to equip developers with the tools to create secure, high-quality search experiences comparable to Google's own, tailored for a wide array of applications. These applications span public-facing websites, internal corporate intranets, and, significantly, serve as the foundation for Retrieval Augmented Generation (RAG) systems that power generative AI agents and applications. The service achieves this by amalgamating deep information retrieval techniques, advanced natural language processing (NLP), and the latest innovations in large language model (LLM) processing. This combination allows Vertex AI Search to more accurately understand user intent and deliver the most pertinent results, marking a departure from traditional keyword-based search towards more sophisticated semantic and conversational search paradigms.  
Strategic Position within Google Cloud AI Ecosystem
The service is not a standalone product but a core element of Vertex AI, Google Cloud's comprehensive and unified machine learning platform. This integration is crucial, as Vertex AI Search leverages and interoperates with other Vertex AI tools and services. Notable among these are Document AI, which facilitates the processing and understanding of diverse document formats, and direct access to Google's powerful foundation models, including the multimodal Gemini family. Its incorporation within the Vertex AI Agent Builder further underscores Google's strategy to provide an end-to-end toolkit for constructing advanced AI agents and applications, where robust search and retrieval capabilities are fundamental.
Core Purpose and Value Proposition
The fundamental aim of Vertex AI Search is to empower enterprises to construct search applications of Google's caliber, operating over their own controlled datasets, which can encompass both structured and unstructured information. A central pillar of its value proposition is its capacity to function as an "out-of-the-box" RAG system. This feature is critical for grounding LLM responses in an enterprise's specific data, a process that significantly improves the accuracy, reliability, and contextual relevance of AI-generated content, thereby reducing the propensity for LLMs to produce "hallucinations" or factually incorrect statements. The simplification of the intricate workflows typically associated with RAG systems—including Extract, Transform, Load (ETL) processes, Optical Character Recognition (OCR), data chunking, embedding generation, and indexing—is a major attraction for businesses.  
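The RAG pattern the service packages up (retrieve relevant enterprise passages, then ground the LLM's prompt in them) can be sketched generically. This toy retriever uses word overlap and a stub in place of a real model call; it is not the Vertex AI Search API:

```python
import re

# Minimal retrieval-augmented generation (RAG) sketch. The corpus, retriever,
# and generate() stub are illustrative stand-ins, not the Vertex AI SDK.
docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support is available by email around the clock.",
    "Enterprise plans include single sign-on and audit logging.",
]

def tokenize(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, docs, k=1):
    """Rank documents by naive word overlap with the query."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def generate(prompt):
    """Stub standing in for an LLM call (e.g. a Gemini model)."""
    return f"[LLM answer grounded in a {len(prompt)}-character prompt]"

query = "What is the refund policy?"
context = "\n".join(retrieve(query, docs))

# Grounding: the model is told to answer only from retrieved context,
# which is what reduces hallucinations in a RAG setup.
prompt = (
    "Answer using only the context below.\n"
    f"Context:\n{context}\n\n"
    f"Question: {query}"
)
print(generate(prompt))
```

A managed offering replaces each stand-in with production machinery (semantic embeddings and a vector index for `retrieve`, a foundation model for `generate`), but the grounding structure of the prompt is the same.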
Moreover, Vertex AI Search extends its utility through specialized, pre-tuned offerings designed for specific industries such as retail (Vertex AI Search for Commerce), media and entertainment (Vertex AI Search for Media), and healthcare and life sciences. These tailored solutions are engineered to address the unique terminologies, data structures, and operational requirements prevalent in these sectors.  
The pronounced emphasis on "out-of-the-box RAG" and the simplification of data processing pipelines points towards a deliberate strategy by Google to lower the entry barrier for enterprises seeking to leverage advanced Generative AI capabilities. Many organizations may lack the specialized AI talent or resources to build such systems from the ground up. Vertex AI Search offers a managed, pre-configured solution, effectively democratizing access to sophisticated RAG technology. By making these capabilities more accessible, Google is not merely selling a search product; it is positioning Vertex AI Search as a foundational layer for a new wave of enterprise AI applications. This approach encourages broader adoption of Generative AI within businesses by mitigating some inherent risks, like LLM hallucinations, and reducing technical complexities. This, in turn, is likely to drive increased consumption of other Google Cloud services, such as storage, compute, and LLM APIs, fostering a more integrated and potentially "sticky" ecosystem.  
Furthermore, Vertex AI Search serves as a conduit between traditional enterprise search mechanisms and the frontier of advanced AI. It is built upon "Google's deep expertise and decades of experience in semantic search technologies", while concurrently incorporating "the latest in large language model (LLM) processing" and "Gemini generative AI". This dual nature allows it to support conventional search use cases, such as website and intranet search, alongside cutting-edge AI applications like RAG for generative AI agents and conversational AI systems. This design provides an evolutionary pathway for enterprises. Organizations can commence by enhancing existing search functionalities and then progressively adopt more advanced AI features as their internal AI maturity and comfort levels grow. This adaptability makes Vertex AI Search an attractive proposition for a diverse range of customers with varying immediate needs and long-term AI ambitions. Such an approach enables Google to capture market share in both the established enterprise search market and the rapidly expanding generative AI application platform market. It offers a smoother transition for businesses, diminishing the perceived risk of adopting state-of-the-art AI by building upon familiar search paradigms, thereby future-proofing their investment.
3. Core Capabilities and Architecture
Vertex AI Search is engineered with a rich set of features and a flexible architecture designed to handle diverse enterprise data and power sophisticated search and AI applications. Its capabilities span from foundational search quality to advanced generative AI enablement, supported by robust data handling mechanisms and extensive customization options.
Key Features
Vertex AI Search integrates several core functionalities that define its power and versatility:
Google-Quality Search: At its heart, the service leverages Google's profound experience in semantic search technologies. This foundation aims to deliver highly relevant search results across a wide array of content types, moving beyond simple keyword matching to incorporate advanced natural language understanding (NLU) and contextual awareness.  
Out-of-the-Box Retrieval Augmented Generation (RAG): A cornerstone feature is its ability to simplify the traditionally complex RAG pipeline. Processes such as ETL, OCR, document chunking, embedding generation, indexing, storage, information retrieval, and summarization are streamlined, often requiring just a few clicks to configure. This capability is paramount for grounding LLM responses in enterprise-specific data, which significantly enhances the trustworthiness and accuracy of generative AI applications.  
Document Understanding: The service benefits from integration with Google's Document AI suite, enabling sophisticated processing of both structured and unstructured documents. This allows for the conversion of raw documents into actionable data, including capabilities like layout parsing and entity extraction.  
Vector Search: Vertex AI Search incorporates powerful vector search technology, essential for modern embeddings-based applications. While it offers out-of-the-box embedding generation and automatic fine-tuning, it also provides flexibility for advanced users. They can utilize custom embeddings and gain direct control over the underlying vector database for specialized use cases such as recommendation engines and ad serving. Recent enhancements include the ability to create and deploy indexes without writing code, and a significant reduction in indexing latency for smaller datasets, from hours down to minutes. However, it's important to note user feedback regarding Vector Search, which has highlighted concerns about operational costs (e.g., the need to keep compute resources active even when not querying), limitations with certain file types (e.g., .xlsx), and constraints on embedding dimensions for specific corpus configurations. This suggests a balance to be struck between the power of Vector Search and its operational overhead and flexibility.  
Generative AI Features: The platform is designed to enable grounded answers by synthesizing information from multiple sources. It also supports the development of conversational AI capabilities, often powered by advanced models like Google's Gemini.
Comprehensive APIs: For developers who require fine-grained control or are building bespoke RAG solutions, Vertex AI Search exposes a suite of APIs. These include APIs for the Document AI Layout Parser, ranking algorithms, grounded generation, and the check grounding API, which verifies the factual basis of generated text.  
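The RAG workflow these features automate (ETL, chunking, embedding generation, indexing, retrieval) can be sketched end to end. The following is a toy illustration only: a bag-of-words counter stands in for a real embedding model, and an in-memory list stands in for the managed index; it is not the service's implementation.

```python
import math
from collections import Counter

CORPUS = []  # (chunk_text, embedding) pairs: a stand-in for the managed index

def chunk(text, size=8):
    # Split a document into fixed-size word chunks; real pipelines use
    # token- and layout-aware chunking (e.g., via Document AI).
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    # Toy bag-of-words "embedding"; the service generates dense
    # embeddings automatically.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ingest(doc):
    # ETL + chunking + embedding + indexing, collapsed into one step.
    for c in chunk(doc):
        CORPUS.append((c, embed(c)))

def retrieve(query, k=2):
    q = embed(query)
    ranked = sorted(CORPUS, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

def grounded_context(query):
    # The retrieved chunks would be passed to an LLM as grounding context.
    return retrieve(query)
```

The value of the managed service is that each of these stages (plus OCR, storage, and summarization) is handled for you, typically through configuration rather than code.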
Data Handling
Effective data management is crucial for any search system. Vertex AI Search provides several mechanisms for ingesting, storing, and organizing data:
Supported Data Sources:
Websites: Content can be indexed by simply providing site URLs.  
Structured Data: The platform supports data from BigQuery tables and NDJSON files, enabling hybrid search (a combination of keyword and semantic search) or recommendation systems. Common examples include product catalogs, movie databases, or professional directories.  
Unstructured Data: Documents in various formats (PDF, DOCX, etc.) and images can be ingested for hybrid search. Use cases include searching through private repositories of research publications or financial reports. Notably, some limitations, such as lack of support for .xlsx files, have been reported specifically for Vector Search.  
Healthcare Data: FHIR R4 formatted data, often imported from the Cloud Healthcare API, can be used to enable hybrid search over clinical data and patient records.  
Media Data: A specialized structured data schema is available for the media industry, catering to content like videos, news articles, music tracks, and podcasts.  
Third-party Data Sources: Vertex AI Search offers connectors (some in Preview) to synchronize data from various third-party applications, such as Jira, Confluence, and Salesforce, ensuring that search results reflect the latest information from these systems.  
Data Stores and Apps: A fundamental architectural concept in Vertex AI Search is the one-to-one relationship between an "app" (which can be a search or a recommendations app) and a "data store". Data is imported into a specific data store, where it is subsequently indexed. The platform provides different types of data stores, each optimized for a particular kind of data (e.g., website content, structured data, unstructured documents, healthcare records, media assets).  
Indexing and Corpus: The term "corpus" refers to the underlying storage and indexing mechanism within Vertex AI Search. Even when users interact with data stores, which act as an abstraction layer, the corpus is the foundational component where data is stored and processed. It is important to understand that costs are associated with the corpus, primarily driven by the volume of indexed data, the amount of storage consumed, and the number of queries processed.  
Schema Definition: Users have the ability to define a schema that specifies which metadata fields from their documents should be indexed. This schema also helps in understanding the structure of the indexed documents.  
Real-time Ingestion: For datasets that change frequently, Vertex AI Search supports real-time ingestion. This can be implemented using a Pub/Sub topic to publish notifications about new or updated documents. A Cloud Function can then subscribe to this topic and use the Vertex AI Search API to ingest, update, or delete documents in the corresponding data store, thereby maintaining data freshness. This is a critical feature for dynamic environments.  
Automated Processing for RAG: When used for Retrieval Augmented Generation, Vertex AI Search automates many of the complex data processing steps, including ETL, OCR, document chunking, embedding generation, and indexing.  
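The structured-data path above typically consumes NDJSON, one JSON object per line, which is what makes it convenient for large catalogs. A minimal loader sketch; the field names are invented for illustration:

```python
import json

# Two illustrative catalog records in NDJSON form: one JSON object per line.
NDJSON = (
    '{"id": "p1", "title": "Espresso Machine", "category": "kitchen", "price": 199.0}\n'
    '{"id": "p2", "title": "Office Chair", "category": "furniture", "price": 89.5}\n'
)

def load_ndjson(text):
    # Each non-empty line is parsed independently, so files can be
    # streamed record by record rather than loaded as one document.
    return [json.loads(line) for line in text.splitlines() if line.strip()]

records = load_ndjson(NDJSON)
```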
The "corpus" serves as the foundational layer for both storage and indexing, and its management has direct cost implications. While data stores provide a user-friendly abstraction, the actual costs are tied to the size of this underlying corpus and the activity it handles. This means that effective data management strategies, such as determining what data to index and defining retention policies, are crucial for optimizing costs, even with the simplified interface of data stores. The "pay only for what you use" principle is directly linked to the activity and volume within this corpus. For large-scale deployments, particularly those involving substantial datasets like the 500GB use case mentioned by a user, the cost implications of the corpus can be a significant planning factor.
There is an observable interplay between the platform's "out-of-the-box" simplicity and the requirements of advanced customization. Vertex AI Search is heavily promoted for its ease of setup and pre-built RAG capabilities, with an emphasis on an "easy experience to get started". However, highly specific enterprise scenarios or complex user requirements—such as querying by unique document identifiers, maintaining multi-year conversational contexts, needing specific embedding dimensions, or handling unsupported file formats like XLSX—may necessitate delving into more intricate configurations, API utilization, and custom development work. For example, implementing real-time ingestion requires setting up Pub/Sub and Cloud Functions, and achieving certain filtering behaviors might involve workarounds like using metadata fields. While comprehensive APIs are available for "granular control or bespoke RAG solutions", this means that the platform's inherent simplicity has boundaries, and deep technical expertise might still be essential for optimal or highly tailored implementations. This suggests a tiered user base: one that leverages Vertex AI Search as a turnkey solution, and another that uses it as a powerful, extensible toolkit for custom builds.
Querying and Customization
Vertex AI Search provides flexible ways to query data and customize the search experience:
Query Types: The platform supports Google-quality search, which represents an evolution from basic keyword matching to modern, conversational search experiences. It can be configured to return only a list of search results or to provide generative, AI-powered answers. A recent user-reported issue (May 2025) indicated that queries against JSON data in the latest release might require phrasing in natural language, suggesting an evolving query interpretation mechanism that prioritizes NLU.  
Customization Options:
Vertex AI Search offers extensive capabilities to tailor search experiences to specific needs.  
Metadata Filtering: A key customization feature is the ability to filter search results based on indexed metadata fields. For instance, if direct filtering by rag_file_ids is not supported by a particular API (like the Grounding API), adding a file_id to document metadata and filtering on that field can serve as an effective alternative.  
Search Widget: Integration into websites can be achieved easily by embedding a JavaScript widget or an HTML component.  
API Integration: For more profound control and custom integrations, the AI Applications API can be used.  
LLM Feature Activation: Features that provide generative answers powered by LLMs typically need to be explicitly enabled.  
Refinement Options: Users can preview search results and refine them by adding or modifying metadata (e.g., based on HTML structure for websites), boosting the ranking of certain results (e.g., based on publication date), or applying filters (e.g., based on URL patterns or other metadata).  
Events-based Reranking and Autocomplete: The platform also supports advanced tuning options such as reranking results based on user interaction events and providing autocomplete suggestions for search queries.  
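The metadata-filtering workaround described above amounts to restricting results to documents whose indexed custom field matches an allowed set. A pure-Python sketch of that behavior; the filter-expression string in the comment is illustrative, not a quoted API contract:

```python
# Documents indexed with a custom "file_id" metadata field, standing in
# for records in a data store.
DOCS = [
    {"file_id": "doc-001", "title": "2023 annual report"},
    {"file_id": "doc-002", "title": "2024 annual report"},
    {"file_id": "doc-003", "title": "onboarding guide"},
]

def filter_by_metadata(documents, field, allowed):
    # Equivalent in spirit to a search-time filter expression such as
    # 'file_id: ANY("doc-001", "doc-002")' (syntax shown for illustration).
    allowed = set(allowed)
    return [d for d in documents if d.get(field) in allowed]

subset = filter_by_metadata(DOCS, "file_id", ["doc-001", "doc-002"])
```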
Multi-Turn Conversation Support:
For conversational AI applications, the Grounding API can utilize the history of a conversation as context for generating subsequent responses.  
To maintain context in multi-turn dialogues, it is recommended to store previous prompts and responses (e.g., in a database or cache) and include this history in the next prompt to the model, while being mindful of the context window limitations of the underlying LLMs.  
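The history-management recommendation above can be sketched as a token-budgeted prompt builder. Whitespace-separated words stand in for real tokenizer counts, and the role/content turn format is an assumption for illustration:

```python
def build_prompt(history, new_user_msg, max_tokens=60):
    """Assemble a prompt from stored conversation history, dropping the
    oldest turns first when the token budget would be exceeded."""
    def tokens(text):
        return len(text.split())  # crude proxy for a real tokenizer

    budget = max_tokens - tokens(new_user_msg)
    kept = []
    for turn in reversed(history):      # keep the newest turns first
        cost = tokens(turn["content"])
        if cost > budget:
            break
        kept.append(turn)
        budget -= cost
    kept.reverse()
    lines = [f'{t["role"]}: {t["content"]}' for t in kept]
    lines.append(f"user: {new_user_msg}")
    return "\n".join(lines)
```

In practice the stored turns would come from a database or cache, and the budget would reflect the context window of the specific LLM in use.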
The evolving nature of query interpretation, particularly the reported shift towards requiring natural language queries for JSON data, underscores a broader trend. If this change is indicative of a deliberate platform direction, it signals a significant alignment of the query experience with Google's core strengths in NLU and conversational AI, likely driven by models like Gemini. This could simplify interactions for end-users but may require developers accustomed to more structured query languages for structured data to adapt their approaches. Such a shift prioritizes natural language understanding across the platform. However, it could also introduce friction for existing applications or development teams that have built systems based on previous query behaviors. This highlights the dynamic nature of managed services, where underlying changes can impact functionality, necessitating user adaptation and diligent monitoring of release notes.
4. Applications and Use Cases
Vertex AI Search is designed to cater to a wide spectrum of applications, from enhancing traditional enterprise search to enabling sophisticated generative AI solutions across various industries. Its versatility allows organizations to leverage their data in novel and impactful ways.
Enterprise Search
A primary application of Vertex AI Search is the modernization and improvement of search functionalities within an organization:
Improving Search for Websites and Intranets: The platform empowers businesses to deploy Google-quality search capabilities on their external-facing websites and internal corporate portals or intranets. This can significantly enhance user experience by making information more discoverable. For basic implementations, this can be as straightforward as integrating a pre-built search widget.  
Employee and Customer Search: Vertex AI Search provides a comprehensive toolkit for accessing, processing, and analyzing enterprise information. This can be used to create powerful search experiences for employees, helping them find internal documents, locate subject matter experts, or access company knowledge bases more efficiently. Similarly, it can improve customer-facing search for product discovery, support documentation, or FAQs.  
Generative AI Enablement
Vertex AI Search plays a crucial role in the burgeoning field of generative AI by providing essential grounding capabilities:
Grounding LLM Responses (RAG): A key and frequently highlighted use case is its function as an out-of-the-box Retrieval Augmented Generation (RAG) system. In this capacity, Vertex AI Search retrieves relevant and factual information from an organization's own data repositories. This retrieved information is then used to "ground" the responses generated by Large Language Models (LLMs). This process is vital for improving the accuracy, reliability, and contextual relevance of LLM outputs, and critically, for reducing the incidence of "hallucinations"—the tendency of LLMs to generate plausible but incorrect or fabricated information.  
Powering Generative AI Agents and Apps: By providing robust grounding capabilities, Vertex AI Search serves as a foundational component for building sophisticated generative AI agents and applications. These AI systems can then interact with and reason about company-specific data, leading to more intelligent and context-aware automated solutions.  
Industry-Specific Solutions
Recognizing that different industries have unique data types, terminologies, and objectives, Google Cloud offers specialized versions of Vertex AI Search:
Vertex AI Search for Commerce (Retail): This version is specifically tuned to enhance the search, product recommendation, and browsing experiences on retail e-commerce channels. It employs AI to understand complex customer queries, interpret shopper intent (even when expressed using informal language or colloquialisms), and automatically provide dynamic spell correction and relevant synonym suggestions. Furthermore, it can optimize search results based on specific business objectives, such as click-through rates (CTR), revenue per session, and conversion rates.  
Vertex AI Search for Media (Media and Entertainment): Tailored for the media industry, this solution aims to deliver more personalized content recommendations, often powered by generative AI. The strategic goal is to increase consumer engagement and time spent on media platforms, which can translate to higher advertising revenue, subscription retention, and overall platform loyalty. It supports structured data formats commonly used in the media sector for assets like videos, news articles, music, and podcasts.  
Vertex AI Search for Healthcare and Life Sciences: This offering provides a medically tuned search engine designed to improve the experiences of both patients and healthcare providers. It can be used, for example, to search through vast clinical data repositories, electronic health records, or a patient's clinical history using exploratory queries. This solution is also built with compliance with healthcare data regulations like HIPAA in mind.  
The development of these industry-specific versions like "Vertex AI Search for Commerce," "Vertex AI Search for Media," and "Vertex AI Search for Healthcare and Life Sciences" is not merely a cosmetic adaptation. It represents a strategic decision by Google to avoid a one-size-fits-all approach. These offerings are "tuned for unique industry requirements", incorporating specialized terminologies, understanding industry-specific data structures, and aligning with distinct business objectives. This targeted approach significantly lowers the barrier to adoption for companies within these verticals, as the solution arrives pre-optimized for their particular needs, thereby reducing the requirement for extensive custom development or fine-tuning. This industry-specific strategy serves as a potent market penetration tactic, allowing Google to compete more effectively against niche players in each vertical and to demonstrate clear return on investment by addressing specific, high-value industry challenges. It also fosters deeper integration into the core business processes of these enterprises, positioning Vertex AI Search as a more strategic and less easily substitutable component of their technology infrastructure. This could, over time, lead to the development of distinct, industry-focused data ecosystems and best practices centered around Vertex AI Search.
Embeddings-Based Applications (via Vector Search)
The underlying Vector Search capability within Vertex AI Search also enables a range of applications that rely on semantic similarity of embeddings:
Recommendation Engines: Vector Search can be a core component in building recommendation engines. By generating numerical representations (embeddings) of items (e.g., products, articles, videos), it can find and suggest items that are semantically similar to what a user is currently viewing or has interacted with in the past.  
Chatbots: For advanced chatbots that need to understand user intent deeply and retrieve relevant information from extensive knowledge bases, Vector Search provides powerful semantic matching capabilities. This allows chatbots to provide more accurate and contextually appropriate responses.  
Ad Serving: In the domain of digital advertising, Vector Search can be employed for semantic matching to deliver more relevant advertisements to users based on content or user profiles.  
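The pattern shared by these applications is nearest-neighbor lookup over item embeddings. A brute-force sketch of that core operation, framed here as a "more like this" recommender: the item names and vectors are invented, and production systems replace the linear scan with an approximate-nearest-neighbor index rather than computing every similarity.

```python
import heapq
import math

# Toy item embeddings; in practice these come from an embedding model.
ITEMS = {
    "thriller_movie": [0.9, 0.1, 0.0],
    "crime_series":   [0.8, 0.2, 0.1],
    "cooking_show":   [0.1, 0.9, 0.2],
    "baking_doc":     [0.2, 0.8, 0.3],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def nearest(query_vec, k=3, exclude=()):
    # Linear scan over the catalog; a real vector index substitutes an
    # approximate search structure for this loop.
    scored = ((cosine(query_vec, v), name)
              for name, v in ITEMS.items() if name not in exclude)
    return [name for _, name in heapq.nlargest(k, scored)]

def recommend(last_viewed, k=2):
    # Recommend the nearest neighbors of the last-viewed item's embedding.
    return nearest(ITEMS[last_viewed], k=k, exclude={last_viewed})
```

The same `nearest` primitive serves chatbots (matching a query embedding against a knowledge base) and ad serving (matching content or profile embeddings against ad inventory).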
The Vector Search component is presented both as an integral technology powering the semantic retrieval within the managed Vertex AI Search service and as a potent, standalone tool accessible via the broader Vertex AI platform. One source, for instance, outlines a methodology for constructing a recommendation engine using Vector Search directly. This dual role means that Vector Search is foundational to the core semantic retrieval capabilities of Vertex AI Search, and simultaneously, it is a powerful component that can be independently leveraged by developers to build other custom AI applications. Consequently, enhancements to Vector Search, such as the recently reported reductions in indexing latency, benefit not only the out-of-the-box Vertex AI Search experience but also any custom AI solutions that developers might construct using this underlying technology. Google is, in essence, offering a spectrum of access to its vector database technology. Enterprises can consume it indirectly and with ease through the managed Vertex AI Search offering, or they can harness it more directly for bespoke AI projects. This flexibility caters to varying levels of technical expertise and diverse application requirements. As more enterprises adopt embeddings for a multitude of AI tasks, a robust, scalable, and user-friendly Vector Search becomes an increasingly critical piece of infrastructure, likely driving further adoption of the entire Vertex AI ecosystem.
Document Processing and Analysis
Leveraging its integration with Document AI, Vertex AI Search offers significant capabilities in document processing:
The service can help extract valuable information, classify documents based on content, and split large documents into manageable chunks. This transforms static documents into actionable intelligence, which can streamline various business workflows and enable more data-driven decision-making. For example, it can be used for analyzing large volumes of textual data, such as customer feedback, product reviews, or research papers, to extract key themes and insights.  
Case Studies (Illustrative Examples)
While specific case studies for "Vertex AI Search" are sometimes intertwined with broader "Vertex AI" successes, several examples illustrate the potential impact of AI grounded on enterprise data, a core principle of Vertex AI Search:
Genial Care (Healthcare): This organization implemented Vertex AI to improve the process of keeping session records for caregivers. This enhancement significantly aided in reviewing progress for autism care, demonstrating Vertex AI's value in managing and utilizing healthcare-related data.  
AES (Manufacturing & Industrial): AES utilized generative AI agents, built with Vertex AI, to streamline energy safety audits. This application resulted in a remarkable 99% reduction in costs and a decrease in audit completion time from 14 days to just one hour. This case highlights the transformative potential of AI agents that are effectively grounded on enterprise-specific information, aligning closely with the RAG capabilities central to Vertex AI Search.  
Xometry (Manufacturing): This company is reported to be revolutionizing custom manufacturing processes by leveraging Vertex AI.  
LUXGEN (Automotive): LUXGEN employed Vertex AI to develop an AI-powered chatbot. This initiative led to improvements in both the car purchasing and driving experiences for customers, while also achieving a 30% reduction in customer service workloads.  
These examples, though some may refer to the broader Vertex AI platform, underscore the types of business outcomes achievable when AI is effectively applied to enterprise data and processes—a domain where Vertex AI Search is designed to excel.
5. Implementation and Management Considerations
Successfully deploying and managing Vertex AI Search involves understanding its setup processes, data ingestion mechanisms, security features, and user access controls. These aspects are critical for ensuring the platform operates efficiently, securely, and in alignment with enterprise requirements.
Setup and Deployment
Vertex AI Search offers flexibility in how it can be implemented and integrated into existing systems:
Google Cloud Console vs. API: Implementation can be approached in two main ways. The Google Cloud console provides a web-based interface for a quick-start experience, allowing users to create applications, import data, test search functionality, and view analytics without extensive coding. Alternatively, for deeper integration into websites or custom applications, the AI Applications API offers programmatic control. A common practice is a hybrid approach, where initial setup and data management are performed via the console, while integration and querying are handled through the API.  
App and Data Store Creation: The typical workflow begins with creating a search or recommendations "app" and then attaching it to a "data store." Data relevant to the application is then imported into this data store and subsequently indexed to make it searchable.  
Embedding JavaScript Widgets: For straightforward website integration, Vertex AI Search provides embeddable JavaScript widgets and API samples. These allow developers to quickly add search or recommendation functionalities to their web pages as HTML components.  
Data Ingestion and Management
The platform provides robust mechanisms for ingesting data from various sources and keeping it up-to-date:
Corpus Management: As previously noted, the "corpus" is the fundamental underlying storage and indexing layer. While data stores offer an abstraction, it is crucial to understand that costs are directly related to the volume of data indexed in the corpus, the storage it consumes, and the query load it handles.  
Pub/Sub for Real-time Updates: For environments with dynamic datasets where information changes frequently, Vertex AI Search supports real-time updates. This is typically achieved by setting up a Pub/Sub topic to which notifications about new or modified documents are published. A Cloud Function, acting as a subscriber to this topic, can then use the Vertex AI Search API to ingest, update, or delete the corresponding documents in the data store. This architecture ensures that the search index remains fresh and reflects the latest information. The capacity for real-time ingestion via Pub/Sub and Cloud Functions is a significant feature. This capability distinguishes it from systems reliant solely on batch indexing, which may not be adequate for environments with rapidly changing information. Real-time ingestion is vital for use cases where data freshness is paramount, such as e-commerce platforms with frequently updated product inventories, news portals, live financial data feeds, or internal systems tracking real-time operational metrics. Without this, search results could quickly become stale and potentially misleading. This feature substantially broadens the applicability of Vertex AI Search, positioning it as a viable solution for dynamic, operational systems where search must accurately reflect the current state of data. However, implementing this real-time pipeline introduces additional architectural components (Pub/Sub topics, Cloud Functions) and associated costs, which organizations must consider in their planning. It also implies a need for robust monitoring of the ingestion pipeline to ensure its reliability.  
Metadata for Filtering and Control: During the schema definition process, specific metadata fields can be designated for indexing. This indexed metadata is critical for enabling powerful filtering of search results. For example, if an application requires users to search within a specific subset of documents identified by a unique ID, and direct filtering by a system-generated rag_file_id is not supported in a particular API context, a workaround involves adding a custom file_id field to each document's metadata. This custom field can then be used as a filter criterion during search queries.  
Data Connectors: To facilitate the ingestion of data from a variety of sources, including first-party systems, other Google services, and third-party applications (such as Jira, Confluence, and Salesforce), Vertex AI Search offers data connectors. These connectors provide read-only access to external applications and help ensure that the data within the search index remains current and synchronized with these source systems.  
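The real-time ingestion path described above can be sketched as the subscriber side of the pipeline. This is a minimal stand-in: Pub/Sub delivers the payload base64-encoded under the event's "data" key, but here a plain dictionary replaces the data store, whereas a real Cloud Function would call the Vertex AI Search API through its client library; the message schema (an "action" plus a "document") is an assumption for illustration.

```python
import base64
import json

# In-memory stand-in for the data store the real handler would update
# through the Vertex AI Search API.
DATA_STORE = {}

def handle_pubsub_event(event):
    """Sketch of a Cloud Function subscriber for document notifications."""
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    action, doc = payload["action"], payload["document"]
    if action in ("ingest", "update"):
        DATA_STORE[doc["id"]] = doc          # upsert keeps the index fresh
    elif action == "delete":
        DATA_STORE.pop(doc["id"], None)
    return action

def make_event(action, document):
    # Helper mimicking what a publisher would place on the topic.
    body = json.dumps({"action": action, "document": document}).encode()
    return {"data": base64.b64encode(body).decode()}
```

Keying upserts on a stable document ID makes the handler idempotent, which matters because Pub/Sub delivery is at-least-once.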
Security and Compliance
Google Cloud places a strong emphasis on security and compliance for its services, and Vertex AI Search incorporates several features to address these enterprise needs:
Data Privacy: A core tenet is that user data ingested into Vertex AI Search is secured within the customer's dedicated cloud instance. Google explicitly states that it does not access or use this customer data for training its general-purpose models or for any other unauthorized purposes.  
Industry Compliance: Vertex AI Search is designed to adhere to various recognized industry standards and regulations. These include HIPAA (Health Insurance Portability and Accountability Act) for healthcare data, the ISO 27000-series for information security management, and SOC (System and Organization Controls) attestations (SOC-1, SOC-2, SOC-3). This compliance is particularly relevant for the specialized versions of Vertex AI Search, such as the one for Healthcare and Life Sciences.  
Access Transparency: This feature, when enabled, provides customers with logs of actions taken by Google personnel if they access customer systems (typically for support purposes), offering a degree of visibility into such interactions.  
Virtual Private Cloud (VPC) Service Controls: To enhance data security and prevent unauthorized data exfiltration or infiltration, customers can use VPC Service Controls to define security perimeters around their Google Cloud resources, including Vertex AI Search.  
Customer-Managed Encryption Keys (CMEK): Available in Preview, CMEK allows customers to use their own cryptographic keys (managed through Cloud Key Management Service) to encrypt data at rest within Vertex AI Search. This gives organizations greater control over their data's encryption.  
User Access and Permissions (IAM)
Proper configuration of Identity and Access Management (IAM) permissions is fundamental to securing Vertex AI Search and ensuring that users only have access to appropriate data and functionalities:
Effective IAM policies are critical. However, some users have reported encountering challenges when trying to identify and configure the specific "Discovery Engine search permissions" required for Vertex AI Search. Difficulties have been noted in determining factors such as principal access boundaries or the impact of deny policies, even when utilizing tools like the IAM Policy Troubleshooter. This suggests that the permission model can be granular and may require careful attention to detail and potentially specialized knowledge to implement correctly, especially for complex scenarios involving fine-grained access control.  
The power of Vertex AI Search lies in its capacity to index and make searchable vast quantities of potentially sensitive enterprise data drawn from diverse sources. While Google Cloud provides a robust suite of security features like VPC Service Controls and CMEK, the responsibility for meticulous IAM configuration and overarching data governance rests heavily with the customer. The user-reported difficulties in navigating IAM permissions for "Discovery Engine search permissions" underscore that the permission model, while offering granular control, might also present complexity. Implementing a least-privilege access model effectively, especially when dealing with nuanced requirements such as filtering search results based on user identity or specific document IDs, may require specialized expertise. Failure to establish and maintain correct IAM policies could inadvertently lead to security vulnerabilities or compliance breaches, thereby undermining the very benefits the search platform aims to provide. Consequently, the "ease of use" often highlighted for search setup must be counterbalanced with rigorous and continuous attention to security and access control from the outset of any deployment. The platform's capability to filter search results based on metadata becomes not just a functional feature but a key security control point if designed and implemented with security considerations in mind.
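To make the idea of metadata filtering as an access-control point concrete, the following is a minimal, hypothetical sketch. It assumes documents carry illustrative metadata fields named `allowed_groups` and `doc_id` (these names are not the actual Vertex AI Search schema), and it builds a filter expression string in a simplified `ANY(...)` style that would be attached to a search request so callers only see documents they are entitled to:

```python
# Hypothetical sketch: derive a metadata filter from a user's group
# memberships so search results are restricted to permitted documents.
# Field names ("allowed_groups", "doc_id") and the filter syntax are
# illustrative, not the authoritative Vertex AI Search schema.

def build_access_filter(user_groups, extra_doc_ids=()):
    """Build a simplified ANY(...)-style filter expression string."""
    if not user_groups and not extra_doc_ids:
        # Deny by default: emit a filter that matches nothing.
        return 'allowed_groups: ANY("__none__")'
    clauses = []
    if user_groups:
        quoted = ", ".join(f'"{g}"' for g in sorted(user_groups))
        clauses.append(f"allowed_groups: ANY({quoted})")
    if extra_doc_ids:
        quoted = ", ".join(f'"{d}"' for d in extra_doc_ids)
        clauses.append(f"doc_id: ANY({quoted})")
    return " OR ".join(clauses)
```

The deny-by-default branch reflects the least-privilege principle discussed above: an unrecognized caller gets a filter that matches nothing rather than an unfiltered query.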
6. Pricing and Commercials
Understanding the pricing structure of Vertex AI Search is essential for organizations evaluating its adoption and for ongoing cost management. The model is designed around the principle of "pay only for what you use," offering flexibility but also requiring careful consideration of various cost components. Google Cloud typically provides a free trial, often including $300 in credits for new customers to explore services. Additionally, a free tier is available for some services, notably a 10 GiB per month free quota for Index Data Storage, which is shared across AI Applications.
The pricing for Vertex AI Search can be broken down into several key areas:
Core Search Editions and Query Costs
Search Standard Edition: This edition is priced based on the number of queries processed, typically per 1,000 queries. For example, a common rate is $1.50 per 1,000 queries.  
Search Enterprise Edition: This edition includes Core Generative Answers (AI Mode) and is priced at a higher rate per 1,000 queries, such as $4.00 per 1,000 queries.  
Advanced Generative Answers (AI Mode): This is an optional add-on available for both Standard and Enterprise Editions. It incurs an additional cost per 1,000 user input queries, for instance, an extra $4.00 per 1,000 user input queries.  
Data Indexing Costs
Index Storage: Costs for storing indexed data are charged per GiB of raw data per month. A typical rate is $5.00 per GiB per month. As mentioned, a free quota (e.g., 10 GiB per month) is usually provided. This cost is directly associated with the underlying "corpus" where data is stored and managed.  
Grounding and Generative AI Cost Components
When utilizing the generative AI capabilities, particularly for grounding LLM responses, several components contribute to the overall cost:
Input Prompt (for grounding): The cost is determined by the number of characters in the input prompt provided for the grounding process, including any grounding facts. An example rate is $0.000125 per 1,000 characters.
Output (generated by model): The cost for the output generated by the LLM is also based on character count. An example rate is $0.000375 per 1,000 characters.
Grounded Generation (for grounding on own retrieved data): There is a cost per 1,000 requests for utilizing the grounding functionality itself, for example, $2.50 per 1,000 requests.
Data Retrieval (Vertex AI Search - Enterprise edition): When Vertex AI Search (Enterprise edition) is used to retrieve documents for grounding, a query cost applies, such as $4.00 per 1,000 requests.
Check Grounding API: This API allows users to assess how well a piece of text (an answer candidate) is grounded in a given set of reference texts (facts). The cost is per 1,000 answer characters, for instance, $0.00075 per 1,000 answer characters.  
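Taken together, these per-character and per-request components can be combined into a rough estimate of what a single grounded answer costs. The sketch below uses only the example rates quoted above; actual rates vary by region and over time and should be confirmed against official Google Cloud pricing:

```python
# Back-of-the-envelope cost model for grounded generative answers,
# using the illustrative example rates quoted in the text.

RATES = {
    "input_per_1k_chars": 0.000125,   # grounding input prompt
    "output_per_1k_chars": 0.000375,  # model output
    "grounded_gen_per_1k_requests": 2.50,
    "retrieval_per_1k_requests": 4.00,  # Enterprise-edition data retrieval
}

def grounded_answer_cost(input_chars, output_chars, requests=1):
    """Estimated USD cost for `requests` grounded answers, each with
    the given input/output character counts."""
    per_request = (
        input_chars / 1000 * RATES["input_per_1k_chars"]
        + output_chars / 1000 * RATES["output_per_1k_chars"]
        + (RATES["grounded_gen_per_1k_requests"]
           + RATES["retrieval_per_1k_requests"]) / 1000
    )
    return requests * per_request
```

For example, a single answer with a 2,000-character grounded prompt and a 1,000-character response costs well under a cent at these rates, but the same workload at a thousand requests is dominated by the per-request grounding and retrieval fees rather than the character charges.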
Industry-Specific Pricing
Vertex AI Search offers specialized pricing for its industry-tailored solutions:
Vertex AI Search for Healthcare: This version has a distinct, typically higher, query cost, such as $20.00 per 1,000 queries. It includes features like GenAI-powered answers and streaming updates to the index, some of which may be in Preview status. Data indexing costs are generally expected to align with standard rates.  
Vertex AI Search for Media:
Media Search API Request Count: A specific query cost applies, for example, $2.00 per 1,000 queries.  
Data Index: Standard data indexing rates, such as $5.00 per GB per month, typically apply.  
Media Recommendations: Pricing for media recommendations is often tiered based on the volume of prediction requests per month (e.g., $0.27 per 1,000 predictions for up to 20 million, $0.18 for the next 280 million, and so on). Additionally, training and tuning of recommendation models are charged per node per hour, for example, $2.50 per node per hour.  
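The tiered prediction pricing above can be sketched as a simple marginal-rate calculation; the tier boundaries and rates below are the example figures from the text, not guaranteed current prices:

```python
# Tiered media-recommendation prediction pricing, using the example
# rates quoted above: $0.27/1k up to 20M predictions per month,
# $0.18/1k for the next 280M, $0.10/1k beyond 300M. Illustrative only.

TIERS = [  # (tier size in predictions, USD per 1,000 predictions)
    (20_000_000, 0.27),
    (280_000_000, 0.18),
    (float("inf"), 0.10),
]

def monthly_prediction_cost(predictions):
    """Estimated monthly USD cost for a given prediction volume."""
    cost, remaining = 0.0, predictions
    for size, rate_per_1k in TIERS:
        in_tier = min(remaining, size)
        cost += in_tier / 1000 * rate_per_1k
        remaining -= in_tier
        if remaining <= 0:
            break
    return cost
```

Because the rates apply marginally, 30 million predictions cost 20M at the first-tier rate plus 10M at the second, not 30M at a single blended rate.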
Document AI Feature Pricing (when integrated)
If Vertex AI Search utilizes integrated Document AI features for processing documents, these will incur their own costs:
Enterprise Document OCR Processor: Pricing is typically tiered based on the number of pages processed per month, for example, $1.50 per 1,000 pages for 1 to 5 million pages per month.  
Layout Parser (includes initial chunking): This feature is priced per 1,000 pages, for instance, $10.00 per 1,000 pages.  
Vector Search Cost Considerations
Specific cost considerations apply to Vertex AI Vector Search, particularly highlighted by user feedback:
A user found Vector Search to be "costly" due to the necessity of keeping compute resources (machines) continuously running for index serving, even during periods of no query activity. This implies ongoing costs for provisioned resources, distinct from per-query charges.  
Supporting documentation confirms this model, with "Index Serving" costs that vary by machine type and region, and "Index Building" costs, such as $3.00 per GiB of data processed.  
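The "always-on" nature of index serving means a baseline monthly cost accrues regardless of query volume. A minimal sketch, assuming the example figures above (a per-node-hour serving rate such as $0.094 for e2-standard-2 in us-central1, and $3.00 per GiB per index build), makes this explicit:

```python
# Rough monthly cost of an always-on Vector Search deployment.
# Rates are the illustrative examples from the text; real rates vary
# by machine type and region.

def vector_search_monthly_cost(nodes, node_hour_rate=0.094,
                               gib_indexed=0.0, rebuilds=0,
                               hours=730):  # ~hours in a month
    """Serving cost accrues for every provisioned node-hour, even with
    zero queries; building cost is charged per GiB per rebuild."""
    serving = nodes * hours * node_hour_rate
    building = gib_indexed * 3.00 * rebuilds
    return serving + building
```

Two serving nodes at the example rate cost roughly $137 per month before a single query is issued, which is exactly the cost profile the user feedback above flagged for intermittent workloads.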
Pricing Examples
Illustrative pricing examples in the source materials demonstrate how these various components can combine to form the total cost for different usage scenarios, including general availability (GA) search functionality, media recommendations, and grounding operations.  
The following table summarizes key pricing components for Vertex AI Search:
Vertex AI Search Pricing Summary

| Service Component | Edition/Type | Unit | Price (Example) | Free Tier/Notes |
| --- | --- | --- | --- | --- |
| Search Queries | Standard | 1,000 queries | $1.50 | 10k free trial queries often included |
| Search Queries | Enterprise (with Core GenAI) | 1,000 queries | $4.00 | 10k free trial queries often included |
| Advanced GenAI (Add-on) | Standard or Enterprise | 1,000 user input queries | +$4.00 | |
| Index Data Storage | All | GiB/month | $5.00 | 10 GiB/month free (shared across AI Applications) |
| Grounding: Input Prompt | Generative AI | 1,000 characters | $0.000125 | |
| Grounding: Output | Generative AI | 1,000 characters | $0.000375 | |
| Grounding: Grounded Generation | Generative AI | 1,000 requests | $2.50 | For grounding on own retrieved data |
| Grounding: Data Retrieval | Enterprise Search | 1,000 requests | $4.00 | When using Vertex AI Search (Enterprise) for retrieval |
| Check Grounding API | API | 1,000 answer characters | $0.00075 | |
| Healthcare Search Queries | Healthcare | 1,000 queries | $20.00 | Includes some Preview features |
| Media Search API Queries | Media | 1,000 queries | $2.00 | |
| Media Recommendations (Predictions) | Media | 1,000 predictions | $0.27 (up to 20M/mo), $0.18 (next 280M/mo), $0.10 (after 300M/mo) | Tiered pricing |
| Media Recs Training/Tuning | Media | Node/hour | $2.50 | |
| Document OCR | Document AI Integration | 1,000 pages | $1.50 (1-5M pages/mo), $0.60 (>5M pages/mo) | Tiered pricing |
| Layout Parser | Document AI Integration | 1,000 pages | $10.00 | Includes initial chunking |
| Vector Search: Index Building | Vector Search | GiB processed | $3.00 | |
| Vector Search: Index Serving | Vector Search | Varies | Varies by machine type & region (e.g., $0.094/node hour for e2-standard-2 in us-central1) | Implies "always-on" costs for provisioned resources |
Note: Prices are illustrative examples based on provided research and are subject to change. Refer to official Google Cloud pricing documentation for current rates.
The multifaceted pricing structure, with costs broken down by queries, data volume, character counts for generative AI, specific APIs, and even underlying Document AI processors, reflects the feature richness and granularity of Vertex AI Search. This allows users to align costs with the specific features they consume, consistent with the "pay only for what you use" philosophy. However, this granularity also means that accurately estimating total costs can be a complex undertaking. Users must thoroughly understand their anticipated usage patterns across various dimensions—query volume, data size, frequency of generative AI interactions, document processing needs—to predict expenses with reasonable accuracy. The seemingly simple act of obtaining a generative answer, for instance, can involve multiple cost components: input prompt processing, output generation, the grounding operation itself, and the data retrieval query. Organizations, particularly those with large datasets, high query volumes, or plans for extensive use of generative features, may find it challenging to forecast costs without detailed analysis and potentially leveraging tools like the Google Cloud pricing calculator. This complexity could present a barrier for smaller organizations or those with less experience in managing cloud expenditures. It also underscores the importance of closely monitoring usage to prevent unexpected costs. The decision between Standard and Enterprise editions, and whether to incorporate Advanced Generative Answers, becomes a significant cost-benefit analysis.  
Furthermore, a critical aspect of the pricing model for certain high-performance features like Vertex AI Vector Search is the "always-on" cost component. User feedback explicitly noted Vector Search as "costly" due to the requirement to "keep my machine on even when a user ain't querying". This is corroborated by pricing details that list "Index Serving" costs varying by machine type and region, which are distinct from purely consumption-based fees (like per-query charges) where costs would be zero if there were no activity. For features like Vector Search that necessitate provisioned infrastructure for index serving, a baseline operational cost exists regardless of query volume. This is a crucial distinction from on-demand pricing models and can significantly impact the total cost of ownership (TCO) for use cases that rely heavily on Vector Search but may experience intermittent query patterns. This continuous cost for certain features means that organizations must evaluate the ongoing value derived against their persistent expense. It might render Vector Search less economical for applications with very sporadic usage unless the benefits during active periods are substantial. This could also suggest that Google might, in the future, offer different tiers or configurations for Vector Search to cater to varying performance and cost needs, or users might need to architect solutions to de-provision and re-provision indexes if usage is highly predictable and infrequent, though this would add operational complexity.  
7. Comparative Analysis
Vertex AI Search operates in a competitive landscape of enterprise search and AI platforms. Understanding its position relative to alternatives is crucial for informed decision-making. Key comparisons include specialized product discovery solutions like Algolia and broader enterprise search platforms from other major cloud providers and niche vendors.
Vertex AI Search for Commerce vs. Algolia
For e-commerce and retail product discovery, Vertex AI Search for Commerce and Algolia are prominent solutions, each with distinct strengths:
Core Search Quality & Features:
Vertex AI Search for Commerce is built upon Google's extensive search algorithm expertise, enabling it to excel at interpreting complex queries by understanding user context, intent, and even informal language. It features dynamic spell correction and synonym suggestions, consistently delivering high-quality, context-rich results. Its primary strengths lie in natural language understanding (NLU) and dynamic AI-driven corrections.
Algolia has established its reputation with a strong focus on semantic search and autocomplete functionalities, powered by its NeuralSearch capabilities. It adapts quickly to user intent. However, it may require more manual fine-tuning to address highly complex or context-rich queries effectively. Algolia is often prized for its speed, ease of configuration, and feature-rich autocomplete.
Customer Engagement & Personalization:
Vertex AI incorporates advanced recommendation models that adapt based on user interactions. It can optimize search results based on defined business objectives like click-through rates (CTR), revenue per session, and conversion rates. Its dynamic personalization capabilities mean search results evolve based on prior user behavior, making the browsing experience progressively more relevant. The deep integration of AI facilitates a more seamless, data-driven personalization experience.
Algolia offers an impressive suite of personalization tools with various recommendation models suitable for different retail scenarios. The platform allows businesses to customize search outcomes through configuration, aligning product listings, faceting, and autocomplete suggestions with their customer engagement strategy. However, its personalization features might require businesses to integrate additional services or perform more fine-tuning to achieve the level of dynamic personalization seen in Vertex AI.
Merchandising & Display Flexibility:
Vertex AI utilizes extensive AI models to enable dynamic ranking configurations that consider not only search relevance but also business performance metrics such as profitability and conversion data. The search engine automatically sorts products by match quality and considers which products are likely to drive the best business outcomes, reducing the burden on retail teams by continuously optimizing based on live data. It can also blend search results with curated collections and themes. A noted current limitation is that Google is still developing new merchandising tools, and the existing toolset is described as "fairly limited".  
Algolia offers powerful faceting and grouping capabilities, allowing for the creation of curated displays for promotions, seasonal events, or special collections. Its flexible configuration options permit merchants to manually define boost and slotting rules to prioritize specific products for better visibility. These manual controls, however, might require more ongoing maintenance compared to Vertex AI's automated, outcome-based ranking. Algolia's configuration-centric approach may be better suited for businesses that prefer hands-on control over merchandising details.
Implementation, Integration & Operational Efficiency:
A key advantage of Vertex AI is its seamless integration within the broader Google Cloud ecosystem, making it a natural choice for retailers already utilizing Google Merchant Center, Google Cloud Storage, or BigQuery. Its sophisticated AI models mean that even a simple initial setup can yield high-quality results, with the system automatically learning from user interactions over time. A potential limitation is its significant data requirements; businesses lacking large volumes of product or interaction data might not fully leverage its advanced capabilities, and smaller brands may find themselves in lower Data Quality tiers.  
Algolia is renowned for its ease of use and rapid deployment, offering a user-friendly interface, comprehensive documentation, and a free tier suitable for early-stage projects. It is designed to integrate with various e-commerce systems and provides a flexible API for straightforward customization. While simpler and more accessible for smaller businesses, this ease of use might necessitate additional configuration for very complex or data-intensive scenarios.
Analytics, Measurement & Future Innovations:
Vertex AI provides extensive insights into both search performance and business outcomes, tracking metrics like CTR, conversion rates, and profitability. The ability to export search and event data to BigQuery enhances its analytical power, offering possibilities for custom dashboards and deeper AI/ML insights. It is well-positioned to benefit from Google's ongoing investments in AI, integration with services like Google Vision API, and the evolution of large language models and conversational commerce.
Algolia offers detailed reporting on search performance, tracking visits, searches, clicks, and conversions, and includes views for data quality monitoring. Its analytics capabilities tend to focus more on immediate search performance rather than deeper business performance metrics like average order value or revenue impact. Algolia is also rapidly innovating, especially in enhancing its semantic search and autocomplete functions, though its evolution may be more incremental compared to Vertex AI's broader ecosystem integration.
In summary, Vertex AI Search for Commerce is often an ideal choice for large retailers with extensive datasets, particularly those already integrated into the Google or Shopify ecosystems, who are seeking advanced AI-driven optimization for customer engagement and business outcomes. Conversely, Algolia presents a strong option for businesses that prioritize rapid deployment, ease of use, and flexible semantic search and autocomplete functionalities, especially smaller retailers or those desiring more hands-on control over their search configuration.
Vertex AI Search vs. Other Enterprise Search Solutions
Beyond e-commerce, Vertex AI Search competes with a range of enterprise search solutions:
INDICA Enterprise Search: This solution utilizes a patented approach to index both structured and unstructured data, prioritizing results by relevance. It offers a sophisticated query builder and comprehensive filtering options. Both Vertex AI Search and INDICA Enterprise Search provide API access, free trials/versions, and similar deployment and support options. INDICA lists "Sensitive Data Discovery" as a feature, while Vertex AI Search highlights "eCommerce Search, Retrieval-Augmented Generation (RAG), Semantic Search, and Site Search" as additional capabilities. Both platforms integrate with services like Gemini, Google Cloud Document AI, Google Cloud Platform, HTML, and Vertex AI.  
Azure AI Search: Microsoft's offering features a vector database specifically designed for advanced RAG and contemporary search functionalities. It emphasizes enterprise readiness, incorporating security, compliance, and ethical AI methodologies. Azure AI Search supports advanced retrieval techniques, integrates with various platforms and data sources, and offers comprehensive vector data processing (extraction, chunking, enrichment, vectorization). It supports diverse vector types, hybrid models, multilingual capabilities, metadata filtering, and extends beyond simple vector searches to include keyword match scoring, reranking, geospatial search, and autocomplete features. The strong emphasis on RAG and vector capabilities by both Vertex AI Search and Azure AI Search positions them as direct competitors in the AI-powered enterprise search market.  
IBM Watson Discovery: This platform leverages AI-driven search to extract precise answers and identify trends from various documents and websites. It employs advanced NLP to comprehend industry-specific terminology, aiming to reduce research time significantly by contextualizing responses and citing source documents. Watson Discovery also uses machine learning to visually categorize text, tables, and images. Its focus on deep NLP and understanding industry-specific language mirrors claims made by Vertex AI, though Watson Discovery has a longer established presence in this particular enterprise AI niche.  
Guru: An AI search and knowledge platform, Guru delivers trusted information from a company's scattered documents, applications, and chat platforms directly within users' existing workflows. It features a personalized AI assistant and can serve as a modern replacement for legacy wikis and intranets. Guru offers extensive native integrations with popular business tools like Slack, Google Workspace, Microsoft 365, Salesforce, and Atlassian products. Guru's primary focus on knowledge management and in-app assistance targets a potentially more specialized use case than the broader enterprise search capabilities of Vertex AI, though there is an overlap in accessing and utilizing internal knowledge.  
AddSearch: Provides fast, customizable site search for websites and web applications, using a crawler or an Indexing API. It offers enterprise-level features such as autocomplete, synonyms, ranking tools, and progressive ranking, designed to scale from small businesses to large corporations.  
Haystack: Aims to connect employees with the people, resources, and information they need. It offers intranet-like functionalities, including custom branding, a modular layout, multi-channel content delivery, analytics, knowledge sharing features, and rich employee profiles with a company directory.  
Atolio: An AI-powered enterprise search engine designed to keep data securely within the customer's own cloud environment (AWS, Azure, or GCP). It provides intelligent, permission-based responses and ensures that intellectual property remains under control, with LLMs that do not train on customer data. Atolio integrates with tools like Office 365, Google Workspace, Slack, and Salesforce. A direct comparison indicates that both Atolio and Vertex AI Search offer similar deployment, support, and training options, and share core features like AI/ML, faceted search, and full-text search. Vertex AI Search additionally lists RAG, Semantic Search, and Site Search as features not specified for Atolio in that comparison.  
The following table provides a high-level feature comparison:
Feature and Capability Comparison: Vertex AI Search vs. Key Competitors

| Feature/Capability | Vertex AI Search | Algolia (Commerce) | Azure AI Search | IBM Watson Discovery | INDICA ES | Guru | Atolio |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Primary Focus | Enterprise Search + RAG, Industry Solutions | Product Discovery, E-commerce Search | Enterprise Search + RAG, Vector DB | NLP-driven Insight Extraction, Document Analysis | General Enterprise Search, Data Discovery | Knowledge Management, In-App Search | Secure Enterprise Search, Knowledge Discovery (Self-Hosted Focus) |
| RAG Capabilities | Out-of-the-box, Custom via APIs | N/A (Focus on product search) | Strong, Vector DB optimized for RAG | Document understanding supports RAG-like patterns | AI/ML features, less explicit RAG focus | Surfaces existing knowledge, less about new content generation | AI-powered answers, less explicit RAG focus |
| Vector Search | Yes, integrated & standalone | Semantic search (NeuralSearch) | Yes, core feature (Vector Database) | Semantic understanding, less focus on explicit vector DB | AI/Machine Learning | AI-powered search | AI-powered search |
| Semantic Search Quality | High (Google tech) | High (NeuralSearch) | High | High (Advanced NLP) | Relevance-based ranking | High for knowledge assets | Intelligent responses |
| Supported Data Types | Structured, Unstructured, Web, Healthcare, Media | Primarily Product Data | Structured, Unstructured, Vector | Documents, Websites | Structured, Unstructured | Docs, Apps, Chats | Enterprise knowledge base (docs, apps) |
| Industry Specializations | Retail, Media, Healthcare | Retail/E-commerce | General Purpose | Tunable for industry terminology | General Purpose | General Knowledge Management | General Enterprise Search |
| Key Differentiators | Google Search tech, Out-of-box RAG, Gemini Integration | Speed, Ease of Config, Autocomplete | Azure Ecosystem Integration, Comprehensive Vector Tools | Deep NLP, Industry Terminology Understanding | Patented indexing, Sensitive Data Discovery | In-app accessibility, Extensive Integrations | Data security (self-hosted, no LLM training on customer data) |
| Generative AI Integration | Strong (Gemini, Grounding API) | Limited (focus on search relevance) | Strong (for RAG with Azure OpenAI) | Supports GenAI workflows | AI/ML capabilities | AI assistant for answers | LLM-powered answers |
| Personalization | Advanced (AI-driven) | Strong (Configurable) | Via integration with other Azure services | N/A | N/A | Personalized AI assistant | N/A |
| Ease of Implementation | Moderate to Complex (depends on use case) | High | Moderate to Complex | Moderate to Complex | Moderate | High | Moderate (focus on secure deployment) |
| Data Security Approach | GCP Security (VPC-SC, CMEK), Data Segregation | Standard SaaS security | Azure Security (Compliance, Ethical AI) | IBM Cloud Security | Standard Enterprise Security | Standard SaaS security | Strong emphasis on self-hosting & data control |
The enterprise search market appears to be evolving along two axes: general-purpose platforms that offer a wide array of capabilities, and more specialized solutions tailored to specific use cases or industries. Artificial intelligence, in various forms such as semantic search, NLP, and vector search, is becoming a common denominator across almost all modern offerings. This means customers often face a choice between adopting a best-of-breed specialized tool that excels in a particular area (like Algolia for e-commerce or Guru for internal knowledge management) or investing in a broader platform like Vertex AI Search or Azure AI Search. These platforms provide good-to-excellent capabilities across many domains but might require more customization or configuration to meet highly specific niche requirements. Vertex AI Search, with its combination of a general platform and distinct industry-specific versions, attempts to bridge this gap. The success of this strategy will likely depend on how effectively its specialized versions compete with dedicated niche solutions and how readily the general platform can be adapted for unique needs.  
As enterprises increasingly deploy AI solutions over sensitive proprietary data, concerns regarding data privacy, security, and intellectual property protection are becoming paramount. Vendors are responding by highlighting their security and data governance features as key differentiators. Atolio, for instance, emphasizes that it "keeps data securely within your cloud environment" and that its "LLMs do not train on your data". Similarly, Vertex AI Search details its security measures, including securing user data within the customer's cloud instance, compliance with standards like HIPAA and ISO, and features like VPC Service Controls and Customer-Managed Encryption Keys (CMEK). Azure AI Search also underscores its commitment to "security, compliance, and ethical AI methodologies". This growing focus suggests that the ability to ensure data sovereignty, meticulously control data access, and prevent data leakage or misuse by AI models is becoming as critical as search relevance or operational speed. For customers, particularly those in highly regulated industries, these data governance and security aspects could become decisive factors when selecting an enterprise search solution, potentially outweighing minor differences in other features. The often "black box" nature of some AI models makes transparent data handling policies and robust security postures increasingly crucial.  
8. Known Limitations, Challenges, and User Experiences
While Vertex AI Search offers powerful capabilities, user experiences and technical reviews have highlighted several limitations, challenges, and considerations that organizations should be aware of during evaluation and implementation.
Reported User Issues and Challenges
Direct user feedback and community discussions have surfaced specific operational issues:
"No results found" Errors / Inconsistent Search Behavior: A notable user experience involved consistently receiving "No results found" messages within the Vertex AI Search app preview. This occurred even when other members of the same organization could use the search functionality without issue, and IAM and Datastore permissions appeared to be identical for the affected user. Such issues point to potential user-specific, environment-related, or difficult-to-diagnose configuration problems that are not immediately apparent.  
Cross-OS Inconsistencies / Browser Compatibility: The same user reported that following the Vertex AI Search tutorial yielded successful results on a Windows operating system, but attempting the same on macOS resulted in a 403 error during the search operation. This suggests possible browser compatibility problems, issues with cached data, or differences in how the application interacts with various operating systems.  
IAM Permission Complexity: Users have expressed difficulty in accurately confirming specific "Discovery Engine search permissions" even when utilizing the IAM Policy Troubleshooter. There was ambiguity regarding the determination of principal access boundaries, the effect of deny policies, or the final resolution of permissions. This indicates that navigating and verifying the necessary IAM permissions for Vertex AI Search can be a complex undertaking.  
Issues with JSON Data Input / Query Phrasing: A recent issue, reported in May 2025, indicates that the latest release of Vertex AI Search (referred to as AI Application) has introduced challenges with semantic search over JSON data. According to the report, the search engine now primarily processes queries phrased in a natural language style, similar to that used in the UI, rather than structured filter expressions. This means filters or conditions must be expressed as plain language questions (e.g., "How many findings have a severity level marked as HIGH in d3v-core?"). Furthermore, it was noted that sometimes, even when specific keys are designated as "searchable" in the datastore schema, the system fails to return results, causing significant problems for certain types of queries. This represents a potentially disruptive change in behavior for users accustomed to working with JSON data in a more structured query manner.  
Lack of Clear Error Messages: In the scenario where a user consistently received "No results found," it was explicitly stated that "There are no console or network errors". The absence of clear, actionable error messages can significantly complicate and prolong the diagnostic process for such issues.  
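The reported JSON query change can be illustrated with a small, hypothetical sketch. The payload shape, field names, and helper below are illustrative only; they merely contrast the structured-filter phrasing that reportedly stopped working with the natural-language phrasing the report describes:

```python
# Hypothetical illustration of the reported May 2025 behavior change
# for semantic search over JSON data: structured filter expressions
# reportedly no longer work as queries, and conditions must instead
# be phrased as natural-language questions.

# A structured-style condition a user might previously have relied on:
old_style_query = 'severity = "HIGH" AND project = "d3v-core"'

# The phrasing the report says now works (from the example in the text):
new_style_query = (
    "How many findings have a severity level marked as HIGH in d3v-core?"
)

def to_natural_language(field_conditions, scope):
    """Hypothetical helper converting simple equality conditions into
    an NL question of the form described in the report."""
    parts = [f"{field} marked as {value}" for field, value in field_conditions]
    return f"How many findings have a {' and '.join(parts)} in {scope}?"
```

Teams with pipelines that generate structured filter strings programmatically would need an adaptation layer of roughly this shape, which is why the change was reported as disruptive.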
Potential Challenges from Technical Specifications and User Feedback
Beyond specific bug reports, technical deep-dives and early adopter feedback have revealed other considerations, particularly concerning the underlying Vector Search component:
Cost of Vector Search: A user found Vertex AI Vector Search to be "costly." This was attributed to the operational model requiring compute resources (machines) to remain active and provisioned for index serving, even during periods when no queries were being actively processed. This implies a continuous baseline cost associated with using Vector Search.  
File Type Limitations (Vector Search): As of the user's documented experience, Vertex AI Vector Search did not offer support for indexing .xlsx (Microsoft Excel) files.  
Document Size Limitations (Vector Search): Concerns were raised about the platform's ability to effectively handle "bigger document sizes" within the Vector Search component.  
Embedding Dimension Constraints (Vector Search): The user reported an inability to create a Vector Search index with embedding dimensions other than the default 768 if the "corpus doesn't support" alternative dimensions. This suggests a potential lack of flexibility in configuring embedding parameters for certain setups.  
rag_file_ids Not Directly Supported for Filtering: For applications using the Grounding API, it was noted that direct filtering of results based on rag_file_ids (presumably identifiers for files used in RAG) is not supported. The suggested workaround involves adding a custom file_id to the document metadata and using that for filtering purposes.  
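The suggested workaround can be sketched roughly as follows. The structData field name matches the Discovery Engine document schema, but the custom file_id key is an assumption from the workaround description, and the ANY(...) filter syntax should be verified against current documentation:

```python
# Sketch of the workaround: stamp a custom "file_id" into each document's
# metadata at ingestion time, then filter on that field instead of the
# unsupported rag_file_ids. The "file_id" key name is illustrative.

def with_file_id(document: dict, file_id: str) -> dict:
    """Attach a custom file_id to a document's structured metadata."""
    doc = dict(document)
    metadata = dict(doc.get("structData", {}))
    metadata["file_id"] = file_id
    doc["structData"] = metadata
    return doc

def file_id_filter(file_ids: list[str]) -> str:
    """Build a filter expression over the custom metadata field."""
    quoted = ", ".join(f'"{fid}"' for fid in file_ids)
    return f"file_id: ANY({quoted})"

doc = with_file_id({"id": "doc-1", "content": {"uri": "gs://bucket/a.pdf"}}, "file-123")
flt = file_id_filter(["file-123", "file-456"])
print(flt)
```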
Data Requirements for Advanced Features (Vertex AI Search for Commerce)
For specialized solutions like Vertex AI Search for Commerce, the effectiveness of advanced features can be contingent on the available data:
A potential limitation highlighted for Vertex AI Search for Commerce is its "significant data requirements." Businesses that lack large volumes of product data or user interaction data (e.g., clicks, purchases) might not be able to fully leverage its advanced AI capabilities for personalization and optimization. Smaller brands, in particular, may find themselves remaining in lower Data Quality tiers, which could impact the performance of these features.  
Merchandising Toolset (Vertex AI Search for Commerce)
The maturity of all components is also a factor:
The current merchandising toolset available within Vertex AI Search for Commerce has been described as "fairly limited." It is noted that Google is still in the process of developing and releasing new tools for this area. Retailers with sophisticated merchandising needs might find the current offerings less comprehensive than desired.  
The rapid evolution of platforms like Vertex AI Search, while bringing cutting-edge features, can also introduce challenges. Recent user reports, such as the significant change in how JSON data queries are handled in the "latest version" as of May 2025, and other unexpected behaviors, illustrate this point. Vertex AI Search is part of a dynamic AI landscape, with Google frequently rolling out updates and integrating new models like Gemini. While this pace of innovation is a key strength, it can also lead to modifications in existing functionalities or, occasionally, introduce temporary instabilities. Users, especially those with established applications built upon specific, previously observed behaviors of the platform, may find themselves needing to adapt their implementations swiftly when such changes occur. The JSON query issue serves as a prime example of a change that could be disruptive for some users. Consequently, organizations adopting Vertex AI Search, particularly for mission-critical applications, should establish robust processes for monitoring platform updates, thoroughly testing changes in staging or development environments, and adapting their code or configurations as required. This highlights an inherent trade-off: gaining access to state-of-the-art AI features comes with the responsibility of managing the impacts of a fast-moving and evolving platform. It also underscores the critical importance of comprehensive documentation and clear, proactive communication from Google regarding any changes in platform behavior.  
Moreover, there can be a discrepancy between the marketed ease-of-use and the actual complexity encountered during real-world implementation, especially for specific or advanced scenarios. While Vertex AI Search is promoted for its straightforward setup and out-of-the-box functionalities, detailed user experiences reveal significant challenges. These can include managing the costs of components like Vector Search, dealing with limitations in supported file types or embedding dimensions, navigating the intricacies of IAM permissions, and achieving highly specific filtering requirements (e.g., querying by a custom document_id). One user, for example, was attempting to implement a relatively complex use case involving 500GB of documents, specific ID-based querying, multi-year conversational history, and real-time data ingestion. This suggests that while basic setup might indeed be simple, implementing advanced or highly tailored enterprise requirements can unearth complexities and limitations not immediately apparent from high-level descriptions. The "out-of-the-box" solution may necessitate considerable workarounds (such as using metadata for ID-based filtering) or encounter hard limitations for particular needs. Therefore, prospective users should conduct thorough proof-of-concept projects tailored to their specific, complex use cases. This is essential to validate that Vertex AI Search and its constituent components, like Vector Search, can adequately meet their technical requirements and align with their cost constraints. Marketing claims of simplicity need to be balanced with a realistic assessment of the effort and expertise required for sophisticated deployments. This also points to a continuous need for more detailed best practices, advanced troubleshooting guides, and transparent documentation from Google for these complex scenarios.  
9. Recent Developments and Future Outlook
Vertex AI Search is a rapidly evolving platform, with Google Cloud continuously integrating its latest AI research and model advancements. Recent developments, particularly highlighted during events like Google I/O and Google Cloud Next 2025, indicate a clear trajectory towards more powerful, integrated, and agentic AI capabilities.
Integration with Latest AI Models (Gemini)
A significant thrust in recent developments is the deepening integration of Vertex AI Search with Google's flagship Gemini models. These models are multimodal, capable of understanding and processing information from various formats (text, images, audio, video, code), and possess advanced reasoning and generation capabilities.  
The Gemini 2.5 model, for example, is slated to be incorporated into Google Search for features like AI Mode and AI Overviews in the U.S. market. This often signals broader availability within Vertex AI for enterprise use cases.  
Within the Vertex AI Agent Builder, Gemini can be utilized to enhance agent responses with information retrieved from Google Search, while Vertex AI Search (with its RAG capabilities) facilitates the seamless integration of enterprise-specific data to ground these advanced models.  
Developers have access to Gemini models through Vertex AI Studio and the Model Garden, allowing for experimentation, fine-tuning, and deployment tailored to specific application needs.  
Platform Enhancements (from Google I/O & Cloud Next 2025)
Key announcements from recent Google events underscore the expansion of the Vertex AI platform, which directly benefits Vertex AI Search:
Vertex AI Agent Builder: This initiative consolidates a suite of tools designed to help developers create enterprise-ready generative AI experiences, applications, and intelligent agents. Vertex AI Search plays a crucial role in this builder by providing the essential data grounding capabilities. The Agent Builder supports the creation of codeless conversational agents and facilitates low-code AI application development.  
Expanded Model Garden: The Model Garden within Vertex AI now offers access to an extensive library of over 200 models. This includes Google's proprietary models (like Gemini and Imagen), models from third-party providers (such as Anthropic's Claude), and popular open-source models (including Gemma and Llama 3.2). This wide selection provides developers with greater flexibility in choosing the optimal model for diverse use cases.  
Multi-agent Ecosystem: Google Cloud is fostering the development of collaborative AI agents with new tools such as the Agent Development Kit (ADK) and the Agent2Agent (A2A) protocol.  
Generative Media Suite: Vertex AI is distinguishing itself by offering a comprehensive suite of generative media models. This includes models for video generation (Veo), image generation (Imagen), speech synthesis, and, with the addition of Lyria, music generation.  
AI Hypercomputer: This revolutionary supercomputing architecture is designed to simplify AI deployment, significantly boost performance, and optimize costs for training and serving large-scale AI models. Services like Vertex AI are built upon and benefit from these infrastructure advancements.  
Performance and Usability Improvements
Google continues to refine the performance and usability of Vertex AI components:
Vector Search Indexing Latency: A notable improvement is the significant reduction in indexing latency for Vector Search, particularly for smaller datasets. This process, which previously could take hours, has been brought down to minutes.  
No-Code Index Deployment for Vector Search: To lower the barrier to entry for using vector databases, developers can now create and deploy Vector Search indexes without needing to write code.  
Emerging Trends and Future Capabilities
The future direction of Vertex AI Search and related AI services points towards increasingly sophisticated and autonomous capabilities:
Agentic Capabilities: Google is actively working on infusing more autonomous, agent-like functionalities into its AI offerings. Project Mariner's "computer use" capabilities are being integrated into the Gemini API and Vertex AI. Furthermore, AI Mode in Google Search Labs is set to gain agentic capabilities for handling tasks such as booking event tickets and making restaurant reservations.  
Deep Research and Live Interaction: For Google Search's AI Mode, "Deep Search" is being introduced in Labs to provide more thorough and comprehensive responses to complex queries. Additionally, "Search Live," stemming from Project Astra, will enable real-time, camera-based conversational interactions with Search.  
Data Analysis and Visualization: Future enhancements to AI Mode in Labs include the ability to analyze complex datasets and automatically create custom graphics and visualizations to bring the data to life, initially focusing on sports and finance queries.  
Thought Summaries: An upcoming feature for Gemini 2.5 Pro and Flash, available in the Gemini API and Vertex AI, is "thought summaries." This will organize the model's raw internal "thoughts" or processing steps into a clear, structured format with headers, key details, and information about model actions, such as when it utilizes external tools.  
The consistent emphasis on integrating advanced multimodal models like Gemini, coupled with the strategic development of the Vertex AI Agent Builder and the introduction of "agentic capabilities", suggests a significant evolution for Vertex AI Search. While RAG primarily focuses on retrieving information to ground LLMs, these newer developments point towards enabling these LLMs (often operating within an agentic framework) to perform more complex tasks, reason more deeply about the retrieved information, and even initiate actions based on that information. The planned inclusion of "thought summaries" further reinforces this direction by providing transparency into the model's reasoning process. This trajectory indicates that Vertex AI Search is moving beyond being a simple information retrieval system. It is increasingly positioned as a critical component that feeds and grounds more sophisticated AI reasoning processes within enterprise-specific agents and applications. The search capability, therefore, becomes the trusted and factual data interface upon which these advanced AI models can operate more reliably and effectively. This positions Vertex AI Search as a fundamental enabler for the next generation of enterprise AI, which will likely be characterized by more autonomous, intelligent agents capable of complex problem-solving and task execution. The quality, comprehensiveness, and freshness of the data indexed by Vertex AI Search will, therefore, directly and critically impact the performance and reliability of these future intelligent systems.  
Furthermore, there is a discernible pattern of advanced AI features, initially tested and rolled out in Google's consumer-facing products, eventually trickling into its enterprise offerings. Many of the new AI features announced for Google Search (the consumer product) at events like I/O 2025—such as AI Mode, Deep Search, Search Live, and agentic capabilities for shopping or reservations—often rely on underlying technologies or paradigms that also find their way into Vertex AI for enterprise clients. Google has a well-established history of leveraging its innovations in consumer AI (like its core search algorithms and natural language processing breakthroughs) as the foundation for its enterprise cloud services. The Gemini family of models, for instance, powers both consumer experiences and enterprise solutions available through Vertex AI. This suggests that innovations and user experience paradigms that are validated and refined at the massive scale of Google's consumer products are likely to be adapted and integrated into Vertex AI Search and related enterprise AI tools. This allows enterprises to benefit from cutting-edge AI capabilities that have been battle-tested in high-volume environments. Consequently, enterprises can anticipate that user expectations for search and AI interaction within their own applications will be increasingly shaped by these advanced consumer experiences. Vertex AI Search, by incorporating these underlying technologies, helps businesses meet these rising expectations. However, this also implies that the pace of change in enterprise tools might be influenced by the rapid innovation cycle of consumer AI, once again underscoring the need for organizational adaptability and readiness to manage platform evolution.  
10. Conclusion and Strategic Recommendations
Vertex AI Search stands as a powerful and strategic offering from Google Cloud, designed to bring Google-quality search and cutting-edge generative AI capabilities to enterprises. Its ability to leverage an organization's own data for grounding large language models, coupled with its integration into the broader Vertex AI ecosystem, positions it as a transformative tool for businesses seeking to unlock greater value from their information assets and build next-generation AI applications.
Summary of Key Benefits and Differentiators
Vertex AI Search offers several compelling advantages:
Leveraging Google's AI Prowess: It is built on Google's decades of experience in search, natural language processing, and AI, promising high relevance and sophisticated understanding of user intent.
Powerful Out-of-the-Box RAG: Simplifies the complex process of building Retrieval Augmented Generation systems, enabling more accurate, reliable, and contextually relevant generative AI applications grounded in enterprise data.
Integration with Gemini and Vertex AI Ecosystem: Seamless access to Google's latest foundation models like Gemini and integration with a comprehensive suite of MLOps tools within Vertex AI provide a unified platform for AI development and deployment.
Industry-Specific Solutions: Tailored offerings for retail, media, and healthcare address unique industry needs, accelerating time-to-value.
Robust Security and Compliance: Enterprise-grade security features and adherence to industry compliance standards provide a trusted environment for sensitive data.
Continuous Innovation: Rapid incorporation of Google's latest AI research ensures the platform remains at the forefront of AI-powered search technology.
Guidance on When Vertex AI Search is a Suitable Choice
Vertex AI Search is particularly well-suited for organizations with the following objectives and characteristics:
Enterprises aiming to build sophisticated, AI-powered search applications that operate over their proprietary structured and unstructured data.
Businesses looking to implement reliable RAG systems to ground their generative AI applications, reduce LLM hallucinations, and ensure responses are based on factual company information.
Companies in the retail, media, and healthcare sectors that can benefit from specialized, pre-tuned search and recommendation solutions.
Organizations already invested in the Google Cloud Platform ecosystem, seeking seamless integration and a unified AI/ML environment.
Businesses that require scalable, enterprise-grade search capabilities incorporating advanced features like vector search, semantic understanding, and conversational AI.
Strategic Considerations for Adoption and Implementation
To maximize the benefits and mitigate potential challenges of adopting Vertex AI Search, organizations should consider the following:
Thorough Proof-of-Concept (PoC) for Complex Use Cases: Given that advanced or highly specific scenarios may encounter limitations or complexities not immediately apparent, conducting rigorous PoC testing tailored to these unique requirements is crucial before full-scale deployment.  
Detailed Cost Modeling: The granular pricing model, which includes charges for queries, data storage, generative AI processing, and potentially always-on resources for components like Vector Search, necessitates careful and detailed cost forecasting. Utilize Google Cloud's pricing calculator and monitor usage closely.  
Prioritize Data Governance and IAM: Due to the platform's ability to access and index vast amounts of enterprise data, investing in meticulous planning and implementation of data governance policies and IAM configurations is paramount. This ensures data security, privacy, and compliance.  
Develop Team Skills and Foster Adaptability: While Vertex AI Search is designed for ease of use in many aspects, advanced customization, troubleshooting, or managing the impact of its rapid evolution may require specialized skills within the implementation team. The platform is constantly changing, so a culture of continuous learning and adaptability is beneficial.  
Consider a Phased Approach: Organizations can begin by leveraging Vertex AI Search to improve existing search functionalities, gaining early wins and familiarity. Subsequently, they can progressively adopt more advanced AI features like RAG and conversational AI as their internal AI maturity and comfort levels grow.
Monitor and Maintain Data Quality: The performance of Vertex AI Search, especially its industry-specific solutions like Vertex AI Search for Commerce, is highly dependent on the quality and volume of the input data. Establish processes for monitoring and maintaining data quality.  
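To make the cost-modeling recommendation above concrete, here is a toy monthly estimator. Every unit price in it is a placeholder, not a published Google Cloud rate; the structure simply illustrates why always-on Vector Search serving nodes tend to dominate a forecast:

```python
# Toy cost forecast. ALL unit prices are placeholder assumptions -- substitute
# figures from the Google Cloud pricing calculator for a real estimate.

def monthly_cost_estimate(
    queries: int,
    gb_indexed: float,
    vector_nodes: int,
    price_per_1k_queries: float = 1.50,   # placeholder rate
    price_per_gb_month: float = 5.00,     # placeholder rate
    node_hour_price: float = 0.90,        # placeholder rate
) -> float:
    query_cost = (queries / 1000) * price_per_1k_queries
    storage_cost = gb_indexed * price_per_gb_month
    # Vector Search serving nodes stay provisioned even when idle,
    # so they accrue charges around the clock (~730 hours/month).
    serving_cost = vector_nodes * node_hour_price * 730
    return round(query_cost + storage_cost + serving_cost, 2)

estimate = monthly_cost_estimate(queries=100_000, gb_indexed=500, vector_nodes=2)
print(estimate)
```

Even with these made-up rates, the always-on serving term dwarfs the per-query and storage terms, which matches the cost concern raised in the Vector Search feedback earlier in this report.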
Final Thoughts on Future Trajectory
Vertex AI Search is on a clear path to becoming more than just an enterprise search tool. Its deepening integration with advanced AI models like Gemini, its role within the Vertex AI Agent Builder, and the emergence of agentic capabilities suggest its evolution into a core "reasoning engine" for enterprise AI. It is well-positioned to serve as a fundamental data grounding and contextualization layer for a new generation of intelligent applications and autonomous agents. As Google continues to infuse its latest AI research and model innovations into the platform, Vertex AI Search will likely remain a key enabler for businesses aiming to harness the full potential of their data in the AI era.
The platform's design, offering a spectrum of capabilities from enhancing basic website search to enabling complex RAG systems and supporting future agentic functionalities, allows organizations to engage with it at various levels of AI readiness. This characteristic positions Vertex AI Search as a potential catalyst for an organization's overall AI maturity journey. Companies can embark on this journey by addressing tangible, lower-risk search improvement needs and then, using the same underlying platform, progressively explore and implement more advanced AI applications. This iterative approach can help build internal confidence, develop requisite skills, and demonstrate value incrementally. In this sense, Vertex AI Search can be viewed not merely as a software product but as a strategic platform that facilitates an organization's AI transformation. By providing an accessible yet powerful and evolving solution, Google encourages deeper and more sustained engagement with its comprehensive AI ecosystem, fostering long-term customer relationships and driving broader adoption of its cloud services. The ultimate success of this approach will hinge on Google's continued commitment to providing clear guidance, robust support, predictable platform evolution, and transparent communication with its users.