herval
hervalicio.us
Software, Hardware, Peopleware
herval · 8 days ago
AI interns have no memory
A common cognitive effect that people fail to consider when they vibe code apps is that you’re effectively outsourcing your thinking as you do it.
Which means you're not going to remember what got written where anywhere near as well as you would if you'd typed the code yourself. So you get frustrated and either dismiss AI assistance altogether, or dig yourself deeper into a big messy codebase, one order at a time.
If you stop and think about it, there’s another situation where you also outsource the execution and have to find ways to reconcile the work with someone else: when you’re working with *other people*.
When asking others to solve something for you, you don’t expect them to read your mind - vague requests will get you vague results. You don’t expect other people to write code the same way you would either - unless you present them with a strict set of rules. You don’t expect other people to guess what you want - you explain the important parts.
In fact, the best mental model I've formulated to date - and it seems to work great with models like Claude - is that you've got to treat the AI as a hypermotivated, eager-to-please junior engineer with perfect long-term memory and an A+ understanding of CS fundamentals, but also incurable short-term amnesia and very little experience reading code before firing off solutions.
So if you constantly find yourself frustrated with AI assistants, it might not be because the tool is limited - you're probably approaching it as a Google replacement, instead of treating it the way it was trained to act.
herval · 14 days ago
Vibe Coding experiment log, July 2025
The Socialmediaverse is full of people claiming that AI models "10x your productivity", and hordes of ancient engineers clinging to any evidence they can find to justify the view that "AI is completely useless". Both are misguided takes - while there's no magic bullet that 10xes productivity at anything, dismissing the biggest shift in the development experience since the invention of the GUI is quite the self-limiting move.
So one of these weekends, I decided to try to "Vibe Code" something and see how far I could get. Mind you, it's been over two decades since I last shipped a native desktop application from scratch. In that period, I worked with different stacks, from Electron to Qt, but generally stayed away from native app development: I remember the hassle of dealing with weird compilation errors, arcane Xcode configurations and random process steps (such as app notarization) that would just eat DAYS of my life. So I never bothered.
Somehow being able to refocus my limited human attention span on how the app should behave and feel (both the code and the UI), instead of getting constantly interrupted to debug platform details, just feels better. Being able to conjure a magical expert in the sky to understand why code signing was failing with code 129, then eyeballing that it was because the expected identifier on a subpanel of a subpanel of app settings was supposed to be 5F9753512E1 instead of 98C66008F9010 (of course!), feels like a superpower.
On the experience
For this app, I mostly used Claude Code and Windsurf (with a mix of o3 and GPT-4.1 models). Here are my key takeaways from the process, as of July 2025. I'm looking forward to seeing what this looks like in a year's time, and to comparing notes with my past self and his AI companions!
AI assistance made me radically better at things I already understood at some level
I have a reasonable understanding of how the SwiftUI & AppKit frameworks work, so I was able to correct the AI's detours and catch accidental duplications quite easily.
That would not be possible if I didn't know what to expect: I recently tried to vibecode a Rust application, and it quickly devolved into a mess of functions that would just not compile anymore, half of which I couldn't understand. It's the pit of doom that vibecoders without an engineering background (and junior engineers) seem to fall into all the time - and it makes sense.
This lines up with my expectation that LLMs can serve as excellent "Power Tools for the Mind" if applied well (and confirms the experience I've been having with AI assistance on my day job).
Finding the Goldilocks prompt
There's a sweet spot of complexity per iteration that seems to yield the best results. Giving the AI a mix of specific references ("I want you to add a function that does X here") and aiming for commit-sized chunks - a few hundred LoC at most, focused on a single conceptual change - led to a workflow where I rarely had to correct the model (in particular Claude Code). This kind of intuition will probably vary between models (I feel like I can't delegate anything that touches more than one file at a time to GPT-4.1, for instance). Not assuming the model remembers context also helps: whenever I saw it taking a detour, such as recreating the same code as before, stopping it and nudging it with sentences like "reread the codebase, you already did this" got it back on track.
It's also quite hilarious how models will gladly rewrite each other's code to "better understand" it, just like human coders would - so this "baby steps" technique acts as a reasonable mitigation for the kind of "rewrite the world" panic you'd see with careless YOLO prompting.
Some people are starting to call this way of working "Structured Wishing" and there's plenty of initiatives aimed at formalizing development lifecycles and tools in a way that you can "declare" things better, such as Amazon Kiro's "Specs".
In a way, we're trying to apply to LLMs what we tried, as an industry, to apply to individuals for the longest time - enforce workflows and lifecycles, as an attempt to get everyone to a similar level of productivity and consistency. I'm not convinced any of the current techniques will work particularly well (all forms of documentation tend to run out of sync with reality immediately), so it remains to be seen whether any specific technique will actually improve accuracy & quality.
Until then, my workflow is playing "code golf" with the LLM. It just works.
On brain rot and memorization
If I could summarize my experience in this project, I'd say it works because I'm acting as an overzealous co-author and "editor": even though I wrote a tiny number of lines of code by hand (mostly small edits), I read through every single line the LLM generated and tried them religiously. This not only allowed me to memorize where things are (although, I suspect, in a much more superficial way than if I'd actively written those lines), but led to its own flavor of exhaustion: I find myself consistently drained at the end of the day, and I suspect a big part of it is due to reading so much more code than I normally would while actively typing.
While writing this, I found that there's active research starting to show up in this space, and it confirms my impression: there's early evidence that completely delegating work to an LLM, rather than actively collaborating on it (either by writing first and using the LLM as an editor, or by co-writing and focusing on the editing), leads to a catastrophic loss of memorization - which, in the context of coding, quickly produces unmaintainable slop. And you won't even remember any of it.
I suspect the majority of the issues and blowback we'll see around "Vibe Coding" in the coming months will be due to this form of misuse, where you blindly trust the AI to do something you don't even understand.
On the product I shipped and its implications
As a heavy user of clipboard manager apps, I decided to code up my own to address some shortcomings of the one I currently use. In particular, the app I currently pay for doesn't show the full content of the copied item (and since I constantly copy-paste somewhat large chunks of text/code, that gets quite annoying).
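Just to illustrate the core mechanism - this is not the shipped app's code (that one is SwiftUI/AppKit), but a deliberately naive sketch shelling out to macOS's pbpaste - a clipboard manager is, at heart, a loop that watches the pasteboard and keeps full copies of whatever lands on it:

```ruby
# Toy clipboard-history watcher - NOT the real app, just the core idea:
# poll the pasteboard and keep the FULL content of every copy, rather
# than a truncated preview.
history = []
last = nil

loop do
  current = `pbpaste`                # macOS CLI: dumps the current clipboard as text
  if current != last && !current.strip.empty?
    history << current               # store the whole thing
    last = current
    puts "captured #{current.length} chars (#{history.size} items in history)"
  end
  sleep 0.5                          # pbpaste offers no change notification, so we poll
end
```

The real thing goes through AppKit's pasteboard APIs and persists the history, but the loop above is the essence of it.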
I think the end result is quite decent (in fact, I'm now using it daily instead of the old app I used to pay for), so I decided to share it - code and all - here.
Skeptics will say it's "too simple" (and it definitely is!). But here's the thing: most applications are as simple (or simpler). Which leads to the second part of my rambling.
I obviously don't expect this app to make me a million dollars, nor to change the world of Mac Clipboard Managers. But this process did highlight a couple of points that were quite crucial to me and how I view the future of SaaS or Indie app development.
"Micro SaaS" is doomed
While we're not there yet if you're not a technical person, we're fast approaching a world where *anyone will be able to create apps, tailor-made for their exact scenario or needs, at the cost of a couple of minutes of typing instructions*.
This means the marginal value of software (which is already low, thanks to the Internet's main business model being that of free software distribution backed by advertising) will drop to zero.
Even if you crack distribution and have access to a huge potential customer base, you'll essentially be competing with every customer's ability to clone your app, ever faster, for "free".
This is a particularly gloomy prospect for Indies and micro SaaS developers, who have stuck to the long tail with simple apps and managed to make millions of dollars in revenue. I'd imagine some of those are already feeling the dip: you can already conjure apps such as stage timers with a single prompt. I was able to ship this Clipboard Manager (from idea to app submission to the App Store) in less than 6 hours.
Now, I don't expect "small software" to disappear - there'll always be room for customer loyalty in one way or another, and some people manage to sell software to their fanbase simply because it was "made by" them (you know who I'm talking about). But the pie will certainly be split among an order of magnitude more entrants, which will drown most Indies below the sustainability line.
Polish and perfection will make it harder and harder to start (or be seen)
The good old days of Web 2.0 and the early mobile industry, with their quirky sites and novel little apps, are firmly behind us. Any niche you can possibly think of is already filled with polished, sleek entrants, which somewhat negates the benefit of "shipping faster".
Launching a product in 2025 is also an exercise in patience. From cookie warnings to KYC laws that vary from state to state, anything you put out requires an ever increasing amount of registrations, disclaimers, licenses and all sorts of red tape, which makes it incredibly hard to innovate.
The good news is there's a big opportunity here for AIs that can help clear these hurdles and again level the playing field.
The attention economy will become even more saturated
The average adult's attention span is fragmented to a point where our brains are becoming desensitized to new options. Quickly built AI Applications will add to this big bucket of noise, which will further raise the bar for getting noticed.
There's a short-term trend in the industry right now where "shock value" seems to capture people's attention, but this too will soon become saturated: as hordes of me-toos brandish unscrupulous slogans ("cheat on everything!") in an attempt to get seen, most people will simply laugh and move on.
What remains defensible?
Not everything will be commoditized equally or at the same speed, of course. The usual "moats" that benefit large tech companies and make them practically unbeatable remain relevant, even in a world where vibe coding drives the cost of zero-to-one to zero:
Network effects are more important than ever - speed of execution rarely takes users away from systems they're deeply tied to (as Twitter clearly demonstrates).
Software that requires complex certifications or regulatory compliance will remain challenging to build - typing code faster isn't enough to compete.
Trust might become more important than ever, in particular as more and more half-baked applications flood everything.
Good marketing is still unbeatable. Although AI tools are exceptionally good at producing slop fast, even the least attentive individual can already easily identify "GPT style" slop - and dismiss it as quickly as it was produced.
Parting thoughts
After six hours of vibe coding, I find myself using my own clipboard manager daily instead of the paid app I've relied on for years. On a personal level, this opens up a fun world of possibilities, and I'm already excited about a few ideas of little apps and productivity hacks I always wish I had.
The fact that a non-expert can ramp up and ship something usable, on any platform, in an afternoon is also exciting as a consumer. An explosion of new apps is a welcome (if overwhelming) prospect, after years of same-old products gated by technical difficulty alone.
Whether this shift will bring about a golden age of indie development and a bunch of truly original new ideas, or just create an infinite sea of undifferentiated Slop Apps, remains to be seen.
Or, as ChatGPT would say after reading...
my weekend dive into “Vibe Coding” wasn’t just about testing whether AI can truly accelerate app development—it opened my eyes to a deeper shift that’s underway in software creation. While AI assistance clearly elevated my productivity and reduced friction in navigating technical hurdles, it wasn’t magic: it amplified my existing skills rather than replacing the need for genuine understanding. This experiment also illuminated broader implications for software as a product, from the impending saturation of micro SaaS markets to increasing pressures for perfection and polish just to be noticed. If anything, the true power of AI lies not in generating lines of code effortlessly, but in reshaping our roles as creators—shifting us from builders bogged down by minutiae into focused architects of intentional user experiences. As these tools mature, success won’t come from blindly trusting AI, but rather from mastering the collaborative dance between human intuition and artificial intelligence.
Thanks for the slop, GPT!
Now I wonder if I can ship a full game next...
herval · 4 months ago
On fragmented memories and lost time
Ever since the Pandemic ended, I've been noticing how it's becoming more and more difficult to meet up with people (be it online or in person). It feels like everyone is suddenly unavailable.
Everyone I know is also constantly complaining about time passing by faster and faster, but nobody seems to have much in the way of news, change, or anything out of the ordinary happening to justify that perception. More than a couple of friends recently described their current day-to-day as being "stuck in a Limbo", but couldn't quite explain why.
Having recently worked on a couple of B2C products, I've also noticed how hard it is to capture people's attention now - even for a product _they know they need_. I thought perhaps all the problems that need solving were already figured out, hence the global apathy.
I never connected the dots on these seemingly unrelated points until now, but it turns out what we're all going through has a name: Fragmented Attention [1].
Our brains evolved to store episodic memories of what they deem relevant - that's why we forget routine, but hold vivid memories of situations that happened a decade ago. The key factor eroding our minds is that the content we consume has gotten so well optimized for quick engagement (TikTok videos are probably the best example) that it's now long enough to activate our interest, but too short to allow the brain to even _consider forming memories_.
Not only that, chronic "atomized content" consumption also leads to narrow and reduced critical thinking abilities [2]. Explains a lot, doesn't it?
So if you're reading this, stop whatever you're doing and try this:
1. open your phone and find the screen time tracking app
2. count how many hours you spent this week across social media and messaging apps
Chances are you'll find that you're spending _at least_ a consistent 2-3 hours per day in a mental limbo, jumping from short content to short content, not absorbing or learning.
And you won't be able to recall a single thing.
(not even this post)
--
[1] https://lnkd.in/ds9EW6hR
[2] https://lnkd.in/d752TFTn
herval · 5 months ago
Is “Vibe Ops” the natural evolution of software operations?
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. - Andrej Karpathy - Feb 2, 2025
As I spent 5 more hours chasing a bug at work today, I came to the sudden realization that, for more than a decade, I've been spending half my waking hours doing something that AIs are likely to be way faster and far more competent at than I'll ever be: operating software. Reading logs. Correlating metrics. Root-causing issues.
No matter where you fall on the engineering expertise spectrum, it's a fact that LLMs are enabling more and more people, coming from any kind of background, to create software.
Although the actual software experts are quick to make fun of the fuck-ups, it's pretty clear to anyone paying attention that this shift will not go away: models will get smarter (and cheaper, and run on your laptop), utilization patterns will emerge (vibecamps, anyone?), tools will get smarter. More and more people will be able to ship software and reach millions of users - fast. It's a Pandora's Box moment, no matter how much we, "actual software engineers", yell and scream and act in disbelief.
There's another side to this coin that nobody is talking about, however, which may have huge implications for everyone - vibecoder or not: how are we going to operate all of that?
I believe the natural evolution of the software operations & reliability discipline will soon be all about employing autonomous agents that understand your infrastructure and help you manage systems purely by expressing intent through conversational interactions.
Forget manual configuration, scripting, or precise commands - just say what you feel needs to happen (“scale the DB up a bit”, “alert me if vibes are off”), and let the software handle it.
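To make that concrete, here's a purely hypothetical sketch of what such a loop could look like - every name in it is invented, and the hard part (a model that reliably and safely maps intent to infrastructure actions) is reduced to a stub with canned answers:

```ruby
# Purely hypothetical "intent to action" loop - all names invented for
# illustration. An agent turns free-form intent into a proposed plan, asks a
# human to confirm, then executes. The model call is stubbed out.

def plan_from_intent(intent)
  # Stand-in for an LLM call: given free-form intent (plus, in real life, a
  # snapshot of your infrastructure), return concrete commands to run.
  case intent
  when /scale.*db/i    then ["echo 'would resize the database instance'"]
  when /alert.*vibes/i then ["echo 'would create a vibes-are-off alert'"]
  else                      ["echo 'no idea how to: #{intent}'"]
  end
end

print "vibe-ops> what needs to happen? "
intent = gets.chomp                  # e.g. "scale the DB up a bit"

plan = plan_from_intent(intent)
puts "proposed plan:", plan
print "run it? [y/N] "
system(plan.join(" && ")) if gets.chomp.downcase == "y"  # human stays in the loop
```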
For lack of a better name, I'll call it "Vibe Ops ™️".
herval · 5 months ago
On tech hiring in the year 2025
It's a very weird moment for hiring remotely in tech.
The first hurdle is literally getting human CVs in front of you: any role you open on LinkedIn gets immediately flooded with hundreds of applicants, most of them recruiting agencies or weirdly empty profiles. The vast majority (including the supposedly human applicants) don't even match the job description.
Then comes the "motivation" part - which used to be solved with "cover letters" in the past. I haven't seen a single one that's not clearly AI-generated slop in a long, long time. Bonus points for the dude who sent a letter that was clearly meant for a different company. Honest mistake, I suppose!
If, after wading through 700 CVs, you end up finding someone who looks human, then comes the part where you actually talk to them.
Finally, the evaluation part. Coding problems just plain don't work anymore: on one side, you have people who got good at memorizing them (which is an old problem - you're just gauging how well people memorize stuff), and on the other, the literal horde of people who are very clearly trying to use AI during the interview.
A recent candidate expressed his disappointment when I didn't ask him to share his screen before the coding problem. He was clearly repeating everything I asked out loud, looking at a specific corner of the screen and reading back responses after a few seconds. I guess he had his phone glued to the screen, or some other setup that wouldn't show up if we did a screen-sharing session.
Take-home exercises, obviously, don't work either. Some candidates don't even try to pretend they wrote the code, during a face-to-face follow-up conversation. I asked a candidate to change the color of a button in the 2-file code he wrote. He could not find the button.
To be fair, none of this would be an issue if AI assistants were at a point where developers could be swapped for mere prompters. And hiring in tech has always been a mess, with random hoops that don't really evaluate much and tons of false negatives.
Work-to-hire is also tough. It's entirely possible for a candidate to spew out passable code for their first week or month on the job. But what happens when they inevitably hit a pothole that the AI assistants they use are unable to fix?
This is all, of course, terrible for candidates as well. I know many amazing engineers who simply cannot get an interview. Between ATS prefiltering candidates with AI and the waves of spam on every role, they're probably not even being seen by the hiring managers on the roles they're applying to. I know more than one case where candidates could only get an interview after rewriting their CV with ChatGPT/Claude. Which just adds to the hallucinatory slop.
We're now at a place where any hire is essentially a coin toss, rendering most conventional interview processes essentially useless.
How do we get out of this mess?
xposted from linkedin
herval · 9 years ago
Building a distributed ball of mud
The "big ball of mud" is one of the most common antipatterns in software:
A Big Ball of Mud is a haphazardly structured, sprawling, sloppy, duct-tape-and-baling-wire, spaghetti-code jungle. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition. Programmers with a shred of architectural sensibility shun these quagmires. Only those who are unconcerned about architecture, and, perhaps, are comfortable with the inertia of the day-to-day chore of patching the holes in these failing dikes, are content to work on such systems.
As startups scale, a pattern that seems to emerge every single time is to get away from the monolith by breaking it down into microservices:
someone (maybe an architecture astronaut, a committee, or a seagull manager who just returned from a "leading industry conference") decides breaking up the monolith is the way to go
code is planned to be carved out in pieces of more or less the same "size", divided (usually) by functionality or (even more troubling) by "business entity" - an especially common misconception when migrating from a model-centric framework such as Rails
as the "migration" proceeds, all the obvious pitfalls show up: monitoring is hard. Deployment is much harder. Logging is impossible. Transactions aren't even a possibility. Every single microservice has the potential to bring down the entire system.
half the time, the entire thing keeps the same database, leading to an even harder to refactor case of Database Integration.
You end up with a big ball of mud - except it sprawls across the network, requires even more duct-tape and the spaghetti now times out regularly due to unexpected I/O issues.
We replaced our monolith with micro services so that every outage could be more like a murder mystery.
— Honest Status Page (@honest_update) October 7, 2015
So you're saying I should never write microservices?
No. My sole objective by writing this (if any, other than flushing it out of my head) is to try to appeal to your rational side.
Humans are very good at concluding the wrong things - something we do mostly at a subconscious level - which is why stopping and really assessing the why has to be a daily exercise. A typical example is, again, the "Rails monolith is bad" meme: you pick a stack with slow I/O and a dynamically typed language, then hastily conclude (when the team scales to thousands of developers) that microservices are the way to go, because any monolith or less granular split would be equally bad. Then you rewrite the whole thing as microservices using the same language (sometimes even the same framework) and complain that things aren't really improving much.
We software folk also have this annoying tendency to try and classify every. Single. Thing. We dislike inconsistency to a fault: it's not OK to have exceptional cases - e.g. an architecture with services and a reasonably managed monolith, or services of different sizes. This is especially aggravated by the "micro" preaching, and leads to solutions that are coarse to the point of irresponsibility: real-life examples include banking systems where the withdraw and deposit routines are not transactional, and the one reason why is "because we broke that down into microservices".
There's plenty of valid reasons to write services - these have been discussed ad nauseam everywhere. Just to enumerate a few:
You have some job that executes asynchronously based on an event queue or maybe called in a fire-and-forget manner.
You have a distributed (or otherwise silo'd) team. Having at least one big service façade is one of the most common ways to improve collaboration/productivity.
A specific subsystem would really benefit from a different tech stack.
You're building an entire new API that's completely independent from the existing monolith save for very specific points of collaboration - for instance a recommender, search engine, etc
It's also worth noting that you can have micro-deployments without necessarily breaking the code into different codebases - an approach people tend to simply forget, which leads to huge incompatibility pains as different service owners implement their own libraries or come up with shared ones. So if your team maintains, say, a public API façade, a protobuf service and some sort of event listener, there's usually no reason (other than pure OCD) to keep these in separate codebases - or even to have them depend on each other at the RPC level. You can still deploy the three things as separate "processes" and scale them independently, as the sketch below illustrates.
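A minimal sketch of the idea, with invented class names - the point is one repo and one artifact, with an environment variable deciding which "service" a given process becomes:

```ruby
# boot.rb - hypothetical entry point for a single-codebase, multi-process app.
# One repo, one deployable artifact; the ROLE env var picks which "service"
# this process becomes. Each role still deploys and scales independently.

# Stand-ins for the real servers - in practice these would share models,
# config and libraries directly, with no RPC hop between them.
class ApiServer
  def run = puts("serving the public API façade...")
end

class RpcServer
  def run = puts("serving internal protobuf RPCs...")
end

class EventListener
  def run = puts("consuming the event queue...")
end

case ENV.fetch("ROLE", "api")
when "api"      then ApiServer.new.run
when "rpc"      then RpcServer.new.run
when "listener" then EventListener.new.run
else abort("unknown ROLE: #{ENV['ROLE']}")
end
```

Running `ROLE=listener ruby boot.rb` under its own supervisor entry gets you independent scaling without a second codebase (or a second deploy pipeline).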
Help, I'm knee-deep in microservices and they seem to be breeding like rabbits!
There seems to be a universal law somewhere stating that every organization must evolve to have at least One Service per Developer.
It's a natural consequence of differing points of view: as requirements (and especially team members) change, perceptions change - and so do abstractions. So in the end, there's bound to be a case where someone claims that "A and B should be separate things" - then bam! New service, new RPC calls, new monitoring, new breaking point.
Some classic symptoms of excessive granularity are good indicators you're building a (distributed) ball of mud:
Services are always scaled/deployed in lockstep. If you can never change A without changing B (be it due to constant interface incompatibilities or something nastier, such as business logic spread between service layers), is there really any benefit to the split?
Services always call each other synchronously, for every single transaction/operation. If A is basically a proxy to B, does it really make sense to have two services?
Services depend on each other with no graceful degradation of any kind. That's an especially common pattern in monolith breakdowns. If everything is in the critical path, distributing things just makes it harder to deploy, debug and scale.
Multiple services that operate on the same context are developed and maintained by the same set of people. Does it really make sense for a team of 5 developers to maintain 5 services (and their respective monitoring, deploy scripts, etc.) that are only really consumed by one another?
Services share context they don't own. This is the "bounded context" problem, and treating it as a problem might sound counterintuitive (I've personally advocated against the need for it many times before) - if I have a "User" protobuf object, why not just pass it around everywhere?
Services must necessarily be developed in the same programming language, using the same framework and the same shared libraries. Unless you're in a huge organization (where perhaps settling on a single stack makes sense... maybe), enforcing a framework on all services is a big sign you're building things that should really be libraries/engines/packages.
Obviously, picking the wrong abstraction can also be problematic on an intra-process/monolithic codebase - the most important question to keep in mind is whether breaking your problem down into distributed objects really makes the problem easier to solve.
In conclusion (or lack thereof)
Microservices are no panacea. That much should be clear on everyone's heads by now.
Every time you feel the urge to adopt the latest and greatest "tech radar trend", consider what problem you're actually trying to solve. Maybe all you need is a better scaling method. Maybe you should work on improving team communication. Maybe you really do need a new stack for a specific part of your beautiful monolith. The right choice is almost always the simpler one.
Thinking is cheap. Let's do more thinking.
herval · 9 years ago
“Resolving technical debt”
Accumulated technical debt is visible in the middle image; the right image shows the state after refactoring.
About 1930 Elías García Martínez, refactored by Cecilia Giménez
Fresco
Composition image CC-BY 2.0 by cea+ in Flickr https://www.flickr.com/photos/33255628@N00/7923536516
(collaboration Tero Kinnunen)
herval · 9 years ago
Hieronymus Bosch “A visual guide to the Scala language”  oil on oak panels, 1490-1510
The left panel shows the functional features, the central one describes the type system, and the right one the object-oriented parts
herval · 9 years ago
WIP Cover Letter
Dear recruiter who just spammed me (the email says {FirstName}, but I assume it's directed at me) about a PHP hands-on CTO position at the fastest-growing startup of all time,
I know you must be very busy with individual communications with the small handful of experienced PHP hands-on CTOs in our competitive little industry, so I’ll save your time with a short cover letter before we jump on that quick phone call:
Dear sir/madam person,
I’m a specialist in Machine Learning, having spent the last 15 years desperately trying to teach machines to do what I want. So far they’ve been less than collaborative.
I'm also a Big Data expert - Huge Data, even - having logged thousands of man-hours of experience waiting for queries to complete. I specialize in predicting the future using reactive data streaming systems with multidimensional algebra capabilities. My favorite tool is MySQL. I did a left outer join once and felt pretty good about it.
I'm also an accomplished webmaster, having managed my own web-scale cloud computing servers by employing Immutable Infrastructure[1] techniques such as bash[2] scripts with immutable lists of IP addresses and Cryptographically Encoded Networking Protocols on the 7th layer of the OSI model to ensure secure operations over an unsecured network. My sources of inspiration include the infamous Web Developer who invented the HTTP status code "418 I'm a Teapot" (Tim Berners-Lee) and the inventor of Java Script, Sir James Gosling[3].
Speaking of Java Script, I have extensive experience building scalable, multi-threaded applications that will make efficient use of machine resources, such as optimizing for maximum RAM usage and scheduling 100% of a single CPU for serving dozens of users by leveraging The Event Loop to the benefit of all major stakeholders. I love the JVM and love coffee. I miss Java Script Applets dearly.
I'm a practitioner of TDD, which I firmly believe produces code that will pass 100% of the tests, almost all the time. I'm known for producing twice as much TDD by employing the industry standards of The Pair Programming™ and The Standing Desk™, indispensable tools for any master of the arts of coding. I have "strong beliefs, weakly held" - such as "if you don't TDD you're a clown, bro!". Please don't even try convincing me otherwise - you're wrong.
I'm an experienced people manager and self-described visionnaire thought leader. I firmly believe in setting the vision by means of copy-pasting other people's ideas, self-congratulation and employing logic to win arguments. My favorite logical constructs are The Strawman and the Appeal to Authority, without which we'd live in a wild west of people disputing my authority and questioning the validity of my vision. We don't want that.
I also read Dale Carnegie's How to Win Friends and Influence People a quarter of a dozen times. My, you look good today! Hire me.
I’d like to close this short missive by leaving a bunch of buzzwords that may or may not fit the exact profile you’re looking for, but will certainly make my humble experience feel on par with the levels of excellence in this vital and growing industry: Servlets, XML, Asp.net, Coldfusion, ECMAscript, CVS, Subversion, Uncle Bob, Object Oriented Design, AngularJs, Java Script, Wordpress Plugin Developer, Agile, Scrum, Kanban, Scrumbut, Scrumban, Scrum Alliance, SEO, Senior Ninja Hacker Rockstar, Martin Fowler.
Looking forward to hearing back from you.
herval · 10 years ago
DeepDreaming in Docker
Google recently published a mind-blowing paper exploring what neural networks interpret when processing an image. The results are a dreamscape of hallucinations and patterns that pretty much answer the question of whether androids dream of electric sheep.
My quick contribution: easy #deepdreams
You can feed the network and see the results on your own images using the IPython Notebook Google also released. I found setting up all the dependencies on a Mac particularly painful, so after some tinkering, I got it working in a Docker container, straight from the command line. All you need to do to process an image is run the container:
docker run -i -t -e INPUT=your_file.png -v /path/to/your/folder:/data herval/deepdream
The source is on Github, in case you want to venture around turning that into a SaaS or adding GPU support!
If you prefer running the original IPython Notebook, someone released another Docker container which works beautifully too - and there's also an attempt to "dreamify" Youtube videos in the works. Looking forward to someone combining this with the Oculus Rift and a live video feed!
Enjoy - and sweet #deepdreams
herval · 10 years ago
As in Maslow's pyramid, each layer requires the previous one. It is useless for code that is charging the wrong customer to be readable. Code should be: • Correct: does the code do what it's supposed...
herval · 11 years ago
We go forward.
herval · 12 years ago
In the film Elysium the space station is rebooted using code taken directly from the Intel Architecture Software Developer’s Manual Volume 3: System Development [pdf].
herval · 12 years ago
SEO-friendly single-page apps in Rails
(originally posted here)
TL;DR version
To make all this process as simple as possible, a variation of the third approach (Rack middleware + Selenium Webdriver + no caching) is available here as a Gem. Drop it in your project, have the dependencies installed, and may the SEO gods bless you!
The whole story
Much has been said about how hard it is to build a single-page app that responds well to crawlers - most crawlers don't support javascript, which means all they get are blank screens when crawling a web app that's entirely assembled and rendered in the browser. Luckily, there are several ways to circumvent the lack of faith from certain crawlers - there's obviously no "one size fits all" approach, so let's take a minute to go through three of the most commonly used ones, highlighting the pros and cons of each.
Render everything in two "modes" (the no script approach)
This strategy consists of rendering your app normally, BUT with pieces of "static" content already baked in (usually inside a <noscript><div> block). In other words, the client-side templates your app serves up for rendering will have at least some level of server-side logic in them - hopefully, very little.
Although it's a somewhat natural approach for a developer used to rendering templates and content on the server side, it leads to a scenario where everything has to be implemented twice - once in the javascript templates/views, once in the pages - making everything hard to maintain (and potentially out of sync) real quick. That Product detail view now includes a "featured review"? No problem: have its text rendered on the server side (and returned by the JSON endpoint your client app will use to render it again in a Javascript template).
This DOES work well for mostly-static, single-content apps (eg.: blogs), where even the javascript app itself would benefit from having the content already pre-loaded (the javascript views/snippets/segments would effectively fiddle with blocks of content, instead of fetching them from the server).
It's worth noting that you should NOT rely on rendering just bits of content when serving pages to bots, as some of them state that they expect the full content of the page. It's also worth pointing out that Google uses the snapshots it takes when crawling to compose the tiny thumbnails you see in search results, so you want these to be as close to the real thing as possible - which just compounds the maintenance issues of this approach.
The hash fragment approach
This technique is supported by Google bot alone (with limited support by some other minor search bots - Facebook's bot works too, for instance) and is explained in detail here.
In short, the process happens as follows: the search bot detects that there are hash parameters in your URL (e.g. www.example.com/something#!foo=bar). It then makes a SECOND request to your server, passing a special parameter (_escaped_fragment_) back - e.g. www.example.com/something?_escaped_fragment_=foo=bar. It's now up to your server-side implementation to return a static HTML representation of the page.
Notice that for pages that don't have a hash-bang on their URL (eg.: your site root), this also requires that you add a meta tag to your pages, allowing the bot to know that those pages are crawlable.
<meta name="fragment" content="!">
Notice the meta tag above is mandatory if your URLs don't use hash fragments (which is becoming the norm these days, thanks to the widespread adoption of HTML5 across browsers) - conversely, this is probably the only technique of the three that will work if you depend on hash-bang URLs on your site (please don't!).
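For illustration, here's roughly what catching that second request could look like on the Rails side - a hedged, era-appropriate sketch, where render_static_snapshot is a stand-in for whichever rendering strategy you end up picking (more on those below):

```ruby
# Hypothetical sketch (old-Rails era): intercept the crawler's second request
# and return static HTML. render_static_snapshot is a stand-in for whichever
# rendering strategy you pick - see the next section.
class ApplicationController < ActionController::Base
  before_filter :serve_static_snapshot_to_bots  # before_action in newer Rails

  private

  def serve_static_snapshot_to_bots
    fragment = params["_escaped_fragment_"]
    return if fragment.nil?

    render text: render_static_snapshot(request.path, fragment), layout: false
  end

  def render_static_snapshot(path, fragment)
    # Stand-in: produce the fully rendered HTML for this route.
    "<html><body>snapshot of #{path} (fragment: #{fragment})</body></html>"
  end
end
```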
You still have to pick a strategy to render the content without Javascript when the _escaped_fragment_ request arrives (like in the previous approach), but at least it takes that content out of the body of the pages served to regular users (reducing, but not eliminating, the duplication issue). This works somewhat well on sites where part of the content is dynamic - not so much on single-page apps. The lack of universal search bot support is also an obvious downside. Which leads us directly to the third approach…
Crawl your own content when the client is a bot
Although this seems counterintuitive at first, this is one of the approaches Google suggests when you're dealing with sites where the majority of the content is generated via javascript (all single-page apps fall in this category).
The idea is simple: if the requester of your content is a search bot, spawn a SECOND request to your own server, render the page using a headless browser (thankfully, there are many, many options to choose from in Ruby) and return that "pre-compiled" content back to the search bot. Boom.
The biggest advantage of this approach is that you don't have to duplicate anything: with a single intercepting script, you can render any different page and retrieve them as needed. Another positive point of this approach is that the content search bots will see is exactly what a final user would.
You can implement this rendering in several ways:
A before_filter on your controllers checks the user-agent making the request, then fetches the desired content and returns it. PROS: an all-vanilla-Rails approach. CONS: you're hitting the entire Rails stack TWICE.
Have a Rack middleware detect the user-agent and initiate the request for the rendered content (sketched right after this list). PROS: still self-contained in the app. CONS: you need to be careful about which content gets served, since the middleware will intercept all requests.
Have the web server (nginx, apache) handle the user-agent and send requests to a different server/endpoint on your server (eg.: example.com/static/original/route/here) that will serve the static content. PROS: only one request hits your app, CONS: requires poking around the underlying server infrastructure.
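Here's a hedged sketch of the middleware option - roughly the shape of what the gem linked at the top does, minus caching and error handling, with illustrative names throughout:

```ruby
# Hypothetical Rack middleware: requests from known crawlers get re-rendered
# through a headless browser; everything else passes through untouched.
require "rack"
require "selenium-webdriver"

class StaticSnapshots
  BOT_AGENTS = /googlebot|bingbot|baiduspider|facebookexternalhit/i

  def initialize(app)
    @app = app
  end

  def call(env)
    return @app.call(env) unless env["HTTP_USER_AGENT"].to_s =~ BOT_AGENTS

    # Re-request our own URL through a real browser, let the javascript run,
    # and hand the final DOM back to the bot.
    html = render_with_headless_browser(Rack::Request.new(env).url)
    [200, { "Content-Type" => "text/html" }, [html]]
  end

  private

  def render_with_headless_browser(url)
    driver = Selenium::WebDriver.for(:firefox)
    driver.get(url)
    driver.page_source
  ensure
    driver.quit if driver
  end
end
```

You'd wire it in with `use StaticSnapshots` in config.ru (or via config.middleware.use in Rails) - and note this is exactly where the double-request caveat discussed below comes from.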
As for how to store the server-side rendered content (again, from worst to best - a small sketch follows the list):
Re-render things on every request. PROS: no cache-validation hell, CONS: performance.
Cache rendered pages, optimally with a reasonable expiration time PROS: much faster than re-fetching/rendering pages every time, CONS: cache maintenance might be an issue.
Cache rendered pages in a way that the web server can fetch them directly (eg.: save them as temp files). PROS: INSANELY FAST. CONS: huge cache maintenance overhead.
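A small sketch of the caching side (hypothetical names again) - writing rendered snapshots to files with a TTL, which also sets you up for the third option, since the web server can be pointed straight at the cache directory:

```ruby
# Hypothetical snapshot cache: rendered HTML is stored as files with a TTL.
# Point nginx/apache straight at CACHE_DIR and the app is skipped entirely
# on cache hits.
require "digest"
require "fileutils"

CACHE_DIR = "/var/www/snapshots"
TTL_SECONDS = 3600 # stale snapshots get re-rendered

def cached_snapshot(url)
  path = File.join(CACHE_DIR, "#{Digest::MD5.hexdigest(url)}.html")
  fresh = File.exist?(path) && (Time.now - File.mtime(path)) < TTL_SECONDS
  return File.read(path) if fresh

  html = render_with_headless_browser(url) # same helper as in the middleware sketch
  FileUtils.mkdir_p(CACHE_DIR)
  File.write(path, html)
  html
end
```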
There are a few obvious issues you have to keep in mind when using this approach:
Every request made by a search bot will consume two processes on your web server (since you're calling yourself back to get the content to return to the bot). Render time is also a major factor in how search engines rank your content, so fast responses here are paramount (which is why caching the static versions of pages is so important).
Some hosting services (I'm looking at you, Heroku) might not support the tools you use to render the content on the server side (capybara-webkit, selenium, etc). Either switch servers or simply host your "staticfying" layer somewhere else.
herval · 12 years ago
EVERY iOS developer should give this a try - if nothing else, testing a RubyMotion app is AMAZING!
It has been exactly a year since we launched RubyMotion. Yep, this is right, RubyMotion is one year old!
We released over the year a total of 35 software updates fixing countless bugs reported by our beloved users. That’s almost 3 updates per month on average!
We shipped significant...
herval · 13 years ago
I just got the most hilarious job offer ever.
[Details about the company and sender obviously omitted to preserve their privacy. You can skip straight to the benefits section, which is downright priceless. Gotta admire the guy for trying, though (I guess).]

Hi Herval,

I found your information on XXXXXX. I apologize for the long email or if I am mistaken about your potential interest.

About Us: We are a young start up called XXXX. We are a XXXXX for XXXXX who are XXXXXX and we solve that by XXXXXX for XXXXXX.

What We Need: Though we have a site and system setup, it needs to be upgraded and redone completely. It is essentially a Frankenstein site with much room for error and inefficiency. It is not intuitive and not built to scale. We're looking for someone to redo the whole site (design+dev) from scratch with a focus on intuitive/appealing design.

[What we need: yes, there was actually a very detailed, 20-page spec attached to the email. And the product exists!]

The Fine Print: We are still very much bootstrapping the business and have only approximately $3,500 to spend on this project. I am well aware that this is NOT a lot of money for high quality coding and i'm not one of those douches that doesn't value the effort/talent of a good programmer; we're just simply not in the financial position to pay you what you deserve. So why would you even think of taking us on??

* I will name my first born child after you. (If this is your type of thing, we can also negotiate naming my second and third born children after you. My dog is available for a name change as well. And my goldfish...)
* We will list you on the new site as part of the team with a link to any website you'd like or just an email.
* I'll send you a weekly 6 pack for the next month. For real.
* I'll follow you on FB and Twitter (who doesn't like more followers?!) and comment positively on anything you post.
* I'll openly declare you as rad, awesome and "the man" on all my social media accounts and again link to your site or wherever you want (no NSFW links).
* I'll owe you one. Hell, let's make it three. I'll owe you three.
* I'll wash your car. Or pay to have it washed.
* Will act as your allibi on three separate occasions. (Eg. Now you can go play poker on Wednesday night and just tell your significant other you were at a business meeting. I will back you up.)
* You'll get first dibs on all future projects (such as courseloads.com planned for Fall 2013)
* Make a request, and let me see if I can accomodate it!

If you read this far, thanks! If you're at all interested or have any questions, please let me know. Would really love to work with you.

Hope to hear from you soon,
X
herval · 13 years ago
Third time's a charm: interactive TVs and pipe dreams
My third company came almost five years after the second one (not counting the "pet projects" that happened in the interim). After coming back to Brazil from a less than pleasant experience working at a large, soul-sucking company in Europe, I felt it was time to try something on my own again - an idea my wife promptly embraced (and joined as a founding partner).

Having been in love with "smart hardware" for quite some time (I used to build half-baked "robots" and electronic boards back when ISA was the top-notch standard for computer buses), I decided to give a hardware-centric idea a shot this time. We spent a couple of months researching markets - everything from smart houses (too small and fragmented a market) to public camera systems with face identification (too tricky to sell) and even smart watches (à la Pebble - too expensive to build, and we had zero expertise in the supply chain). We ended up settling on the "interactive environments" niche, which is supposedly cheaper to sell and usually boils down to software running on hardware you can buy in local shops.

Interactive environments are basically an attempt to augment the "real world" with digital, either by bringing online presence to physical environments or by using sensors to detect people and serve them interesting digital information: think interactive shop windows, billboards you can send tweets to, and the like.

Not knowing the market inside out, or exactly which tech we could focus on, we took a few weeks building quick proofs of concept, talking to customers in different segments and generally getting to know the marketplace in order to take a well-positioned first step. And boy, did we pivot.

After attempting all sorts of different "MVPs" - bluetooth marketing (contextually sending images and ringtones), touch-screen kiosks, projecting Twitter feeds on walls at events, little GSM boxes one could use to run "interactive SMS polls" and using Kinects to detect purchasing intentions inside supermarkets - we eventually settled down on the more basic "digital signage" package, licensing our platform to media companies that wanted to roll out their own indoor marketing networks.

Digital signage (we're talking a niche inside a niche now!) is not a new thing: it comprises all forms of digital communication using billboards, indoor TVs or projectors, which can be used to convey information, entertain people while they wait in line, etc. Think the flight information screens you see at airports, or TVs in bars and supermarkets.

The big pain points we were determined to fix were the deployment complexity of these systems (updating content requires physical access to each machine) and the absurd lack of features they provide (most digital signage "solutions" out there are just recorded video or PowerPoint decks playing non-stop).

And thus was born BlooBox: our 100% online, one-click install, remotely manageable digital signage solution that supported not just displaying text, photos and videos on TV screens, but also polling for live tweets on any topic or hashtag, and splitting the screen between several different kinds of content (want to put a ticker over a video stream with an announcement on the side? No problem!).

Tech lock-in

One thing we quickly learned the worst possible way was that building rich desktop applications is surprisingly hard.

There were just no options out there to build the kind of thing we had in mind (a client app that could self-update, display dynamic content, run on any platform and allow users to build their own content with tools they already knew, like Flash or HTML) - the only two contestants being Adobe Air and Microsoft Silverlight, both "mature" by market standards, but frustratingly restrictive in every possible way. There were memory leaks everywhere (making the app crash every X hours). You couldn't do two things at the same time (Adobe Flash is single-threaded, meaning your entire UI freezes when you try to download a piece of content for later display, for instance). There was no documentation whatsoever on things such as loading modules/content or automatic updating. The list of problems and pain points kept creeping up by the second, and for every bug we squashed, another two were born. We even got in touch with the Adobe Air team itself, helping them debug a couple of memory leaks in foundation components of the platform (which led us to use even flimsier beta versions for long periods of time). Despite all that, with a bit of glue and a lot of spit, we were able to put together a functional product in a few months.

How do you price that thing?

Along with lack of adoption, pricing is the number one killer of companies/projects. Not knowing exactly how to make customers pay, we toyed with all sorts of sales models, from pay-per-use to pay-per-plugin, as well as the overly popular "ad supported" model that works so well online (but was a complete disaster in our offline context - more on that later).

The big problem, in our case, was the horizontal nature of the market: the customers included everyone from bars (which will pay nothing for a product) to franchise networks (great clients, but with a very long sales process) and single-venue businesses, such as hospitals or clinics. Given our location (yes, [here we were again](http://hervalicio.us/post/33446530406/too-many-hands-spoil-the-fudge-a-post-mortem), trying to start a product in the worst possible market. At least it was supposed to be "just during the beta period" this time…), we had only a handful of possible clients in each of those verticals, so specializing was not an option.

Driven mostly by what others were doing, we ended up settling on the "pay per TV" model, with a low monthly fee (~48 USD each) and free installation. The customer would have to buy their own TV and computer, though, which was costly enough for most small ones, and the support cost was an obvious red flag: the more hardware you have distributed over a wide area, the more people you need to keep everything working. You'd be surprised how often customers call in only for you to find that the system "is not working again!!!" because they've unplugged it from the power outlet or simply forgot to turn on the TV…

Less than a year in from day one, we were finally breaking even. But the outlook was not good: we had to make more money per customer if we wanted to keep growing at all. Our final big mistake was, ironically, trying to "eat our own dog food": determined to not just offer the software to third parties, we decided to go the extra mile and actually manage our own indoor media network.

The beginning of the end: "let's just bootstrap the market!"

Looking back at it, this was possibly the dumbest decision I ever made.

This final change of sales model would require a big swing in the way we presented the product to everyone: no longer would bar owners be asked to buy TV sets or pay monthly fees (we'd give them all that for free). No longer would we spend time, energy and countless trips trying to convince every single layer of management at a franchise network that they should get our product and roll it out continent-wide. Our sales efforts would turn solely to convincing mid-to-large companies to advertise on our TV screens, positioned in half a dozen public spots.

If we really wanted anyone to look at our network and see some potential for advertising, we'd have to expand quickly too - and so we did, growing from 6 paying customers to almost 20 free ones in less than 2 weeks (which positioned us as the second biggest ad network in the region). That single decision cost a fortune in hardware acquisition, incurred an even larger support overhead and deviated us too far from what should have been our sole focus: providing software as a service, getting paid for it, rinse and repeat. Selling advertising space proved itself a tedious, impossible task and undermined the morale of the team to the point where most of us simply started hating to get out of bed for yet another day of trying to stay afloat.

It's worth noting that we didn't just fall into this trap out of the blue: the entire ad-supported digital signage idea was a trend worldwide, with big-name networks being [acquired all over](http://www.dailydooh.com/archives/71014). It was obviously "the way to go". Today, a lot of those companies, even large ones, [have also died out](http://www.wirespring.com/dynamic_digital_signage_and_interactive_kiosks_journal/articles/M_A_List__Digital_Signage_Mergers__Acquisitions_and_Bankruptcies-769.html), victims of the same uneven expectations about a market trend that never materialized. If there was any money to be made on ad-supported networks, it was made by very few deep-pocketed giants. The money we spent on this wild goose chase was sorely missed in the end, and could have kept us afloat for another year.

It still saddens me to see that no one has ever shipped the things we pioneered in those little MVPs. The whole "interactive environment" idea is a very exciting one - if only the associated costs weren't so big, and the tools to build it so archaic. Maybe in a few years - until then, here we are, stuck with clunky kiosks, useless interactive floor projections and TVs displaying ads that no one pays attention to…

In the end, the company wasn't a total failure: we managed to sell it a mere couple of weeks before running out of cash, and decided to move on to something less hardware-related, more viral. Time for some buzzwords!