“Good debugger worth weight in shiny rocks, in fact also more”
I’ve spent time at small startups and on “elite” big tech teams, and I’m usually the only one on my team using a debugger. Almost everyone in the real world (at least in web tech) seems to do print statement debugging. I have tried and failed to get others interested in using my workflow.
I generally agree that it’s the best way to start understanding a system. Breaking on an interesting line of code during a test run and studying the call stack that got me there is infinitely easier than trying to run the code forwards in my head.
Young grugs: learning this skill is a minor superpower. Take the time to get it working on your codebase, if you can.
demosthanos 3 hours ago [-]
There was a good discussion on this topic years ago [0]. The top comment shares this quote from Brian Kernighan and Rob Pike, neither of whom I'd call a young grug:
> As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places. Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.
I tend to agree with them on this. For almost all of the work that I do, this hypothesis-logs-exec loop gets me to the answer substantially faster. I'm not "trying to run the code forwards in my head". I already have a working model for the way that the code runs, I know what output I expect to see if the program is behaving according to that model, and I can usually quickly intuit what is actually happening based on the incorrect output from the prints.
There's another story I heard once from Rob Pike about debugging. (And this was many years ago - I hope I get the details right).
He said that he and Brian Kernighan would pair while debugging. As Rob Pike told it, he would often drive the computer, putting in print statements, rerunning the program and so on. Brian Kernighan would stand behind him and quietly just think about the bug and the output the program was generating. Apparently Brian K would often just - after being silent for awhile - say "oh, I think the bug is in this function, on this line" and sure enough, there it was. Apparently it happened often enough that he thought Brian might have figured out more bugs than Rob did, even without his hands touching the keyboard.
Personally I love a good debugger. But I still think about that from time to time. There's a good chance I should step away from the computer more often and just contemplate it.
artursapek 12 minutes ago [-]
some of my best work as a programmer is done walking my dog or sitting in the forest
recursivedoubts 59 minutes ago [-]
I think a lot of “naturals” find visual debuggers pointless, but for people who don’t naturally intuit how a computer works, they can be invaluable in building that intuition.
I insist that my students learn a visual debugger in my classes for this reason: what the "stack" really is, how a loop really executes, etc.
It doesn't replace thinking & print debugging, but it complements them both when done properly.
ItsHarper 18 minutes ago [-]
Agreed, I spent a lot more time using debuggers when I was getting started
sethjgore 55 minutes ago [-]
What do you mean “visual debugger?”
porridgeraisin 32 minutes ago [-]
In vscode when you step to the next statement it highlights in the left pane the variables that change. Something like that.
It's useful for a beginner e.g in a for loop to see how `i` changes at the end of the loop. And similarly with return values of functions and so on.
XorNot 40 minutes ago [-]
Presumably an IDE rather than dealing with the gdb CLI.
kapildev 44 minutes ago [-]
Exactly, these judiciously placed print statements help me locate the site of the error much faster than using a debugger. Then, I could switch to using a debugger once I narrow things down if I am still unsure about the cause of the problem.
never_inline 21 minutes ago [-]
I use single-stepping very rarely in practice when using a debugger, except when following the "value of a variable or two". Even for that it's more convenient than pprint.pprint(), because of the structured display of values, expression evaluation, and the ability to inspect callers up the stack.
james_marks 2 hours ago [-]
Adding these print statements is one of my favorite LLM use cases.
Hard to get wrong, tedious to type and a huge speed increase to visually scan the output.
Freedom2 1 hour ago [-]
Agreed. Typically my debugger use case is when I'm exploring a potentially unknown range of values at a specific point in time, where I also might not know how to log it out. Having the LLM manage all of that for me and get it 95% correct is the real minor superpower.
throwaway173738 57 minutes ago [-]
I tend not to use a debugger for breakpoints but I use it a lot for watchpoints because I can adjust my print statements without restarting the program
titanomachy 2 hours ago [-]
I do a lot of print statements as well. I think the greatest value of debuggers comes when I’m working on a codebase where I don’t already have a strong mental model, because it lets me read the code as a living artifact with states and stack traces. Like Rob Pike, I also find single-stepping tedious.
jacques_chester 1 hour ago [-]
> Brian Kernighan and Rob Pike
Most of us aren't Brian Kernighan or Rob Pike.
I am very happy for people who are, but I am firmly at a grug level.
legends2k 10 minutes ago [-]
This! Also my guess would be Kernighan or Pike aren't (weren't?) deployed into some random codebase every now and then, while most grugs are. When you build something from scratch you can get by without debuggers, sure, but in a foreign codebase a stupid grug like me can do much better with tools.
geophile 2 hours ago [-]
I am also in the camp that has very little use for debuggers.
A point that may be pedantic: I don't add (and then remove) "print" statements. I add logging code, that stays forever. For a major interface, I'll usually start with INFO level debugging, to document function entry/exit, with param values. I add more detailed logging as I start to use the system and find out what needs extra scrutiny. This approach is very easy to get started with and maintain, and provides powerful insight into problems as they arise.
I also put a lot of work into formatting log statements. I once worked on a distributed system, and getting the prefix of each log statement exactly right was very useful -- node id, pid, timestamp, all of it fixed width. I could download logs from across the cluster, sort, and have a single file that interleaved actions from across the cluster.
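For flavor, here's a minimal Python sketch of that kind of fixed-width prefix with the standard logging module; the NODE_ID environment variable and the exact field widths are assumptions, not the actual system described above:

    import logging, os, time

    class ClusterFormatter(logging.Formatter):
        converter = time.gmtime  # log in UTC so files from different nodes interleave cleanly

        def format(self, record):
            # NODE_ID is an assumed env var standing in for however a node knows its own id
            record.node = os.environ.get("NODE_ID", "node-00")
            return super().format(record)

    handler = logging.StreamHandler()
    handler.setFormatter(ClusterFormatter(
        fmt="%(asctime)s.%(msecs)03d %(node)-8s pid=%(process)-7d %(levelname)-7s %(message)s",
        datefmt="%Y-%m-%dT%H:%M:%S",
    ))
    log = logging.getLogger("svc")
    log.addHandler(handler)
    log.setLevel(logging.INFO)

    log.info("enter rebalance(shard=%s)", 42)

With every node logging UTC timestamps in the same columns, you can concatenate the per-node files, sort, and get the interleaved cluster view described above.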
AdieuToLogic 15 minutes ago [-]
> A point that may be pedantic: I don't add (and then remove) "print" statements. I add logging code, that stays forever. For a major interface, I'll usually start with INFO level debugging, to document function entry/exit, with param values.
This is an anti-pattern which results in voluminous log "noise" when the system operates as expected. To the degree that I have personally seen gigabytes per day produced by employing it. It also can litter the solution with transient concerns once thought important and are no longer relevant.
If detailed method invocation history is a requirement, consider using the Writer Monad[0] and only emitting log entries when either an error is detected or in an "unconditionally emit trace logs" environment (such as local unit/integration tests).
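Not a literal Writer monad, but here's a rough Python sketch of the same shape: the trace is accumulated as a value and only emitted on failure, or when an assumed TRACE_ALWAYS flag is on (e.g. in local tests). The validate/load_user/apply_change helpers are made-up stand-ins so the sketch runs:

    import logging, os

    log = logging.getLogger("svc")
    TRACE_ALWAYS = os.environ.get("TRACE_ALWAYS") == "1"

    # stand-in helpers; the real ones are whatever your service actually does
    def validate(req): return {"user_id": req["user_id"], "delta": req["delta"]}
    def load_user(user_id): return {"id": user_id, "balance": 100}
    def apply_change(user, v): return user["balance"] + v["delta"]

    def handle_request(req):
        trace = []  # the "accumulated log" travels with the computation instead of being emitted
        try:
            trace.append(f"validate {req!r}")
            v = validate(req)
            trace.append(f"load user {v['user_id']}")
            user = load_user(v["user_id"])
            trace.append(f"apply delta {v['delta']} to user {user['id']}")
            return apply_change(user, v)
        except Exception:
            for line in trace:          # the trace only hits the log when something went wrong
                log.error("trace: %s", line)
            raise
        finally:
            if TRACE_ALWAYS:            # ...or when you explicitly ask for it
                for line in trace:
                    log.debug("trace: %s", line)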
What I find annoying is how these async toolkits screw up the stack trace, so I have little idea what the real program flow looks like. That removes much of the benefit right off the top.
Some IDEs promise to solve that, but I’ve not been impressed thus far.
YMMV based on language/runtime/toolkit of course. This might get added to my wishlist for my next language of choice.
hobs 2 hours ago [-]
A log is very different from a debugger, though: one tells you what happened, the other shows you the entire state and doesn't make you assemble it in your head.
demosthanos 2 hours ago [-]
Your framing makes it sound like the log is worse in some way, but what the log gives you that the debugger makes you assemble in your head is a timeline of when things happen. Being able to see time is a pretty big benefit for most types of software.
I can always drop an entire state object into the log if I need it, but the only way for a debugger to approximate what a log can give me is for me to step through a bunch of break points and hold the time stream in my head.
The one place where a debugger is straight up better is if I know exactly which unit of code is failing and that unit has complicated logic that is worth stepping through line by line. That's what they were designed for, and they're very useful for that, but it's also not the most common kind of troubleshooting I run into.
switchbak 27 minutes ago [-]
In the early 2000’s I whipped up a tool to convert log statements into visual swim lanes like the Chrome profiler does. That thing was a godsend for reasoning about complex parallelism.
hobs 2 hours ago [-]
It's not worse or better, but it's not really comparable is all I'm really saying; I would not use them for the same things.
nyarlathotep_ 2 hours ago [-]
> and I’m usually the only one on my team using a debugger. Almost everyone in the real world (at least in web tech) seems to do print statement debugging.
One of the first things I do in a codebase is get some working IDE/editor up where I can quickly run the program under a debugger, even if I'm not immediately troubleshooting something. It's never long before I need to use it.
I was baffled when I too encountered this. Even working collaboratively with people they'd have no concept of how to use a debugger.
"No, set a breakpoint there"
"yeah now step into the function and inspect the state of those variables"
"step over that"
: blank stares at each instance :
roncesvalles 3 hours ago [-]
I'd love to use a real debugger but as someone who has only ever worked at large companies, this was just never an option. In a microservices mesh architecture, you can't really run anything locally at all, and the test environment is often not configured to allow hooking up a stepping debugger. Print debugging is all you have. If there's a problem with the logging system itself or something that crashes the program before the logs can flush, then not even that.
alisonatwork 29 minutes ago [-]
This is basically it. When I started programming in C, I used a debugger all the time. Even a bit later doing Java monoliths I could spin up the whole app on my local and debug in the IDE. But nowadays running a dozen processes and containers and whatnot, it's just hopeless. The individual developer experience has gone very much backwards in the microservice era so the best thing to do is embrace observability, feature toggles etc and test in prod or a staging environment somewhere outside of your local machine.
frollogaston 2 hours ago [-]
Same, this isn't my choice, debuggers don't work here. And we don't even have microservices.
idontwantthis 2 hours ago [-]
At my company our system is composed of 2 dozen different services and all of them can run locally in minikube and easily be debugged in jetbrains.
wagwang 1 hour ago [-]
I find that debuggers solve a very specific class of bugs of intense branching complexity in a self contained system. But the moment there's stuff going in and out of DBs, other services, multithreading, integrations, etc, the debugger becomes more of a liability without a really advanced tooling team.
Buttons840 15 minutes ago [-]
Another underutilized debugging superpower is debug-level logging.
I've never worked somewhere where logging is taken seriously. Like, our AWS systems produce logs and they get collected somewhere, but none of our code ever does any serious logging.
If people like print-statement debugging so much, then double down on it and do it right, with a proper logging framework and putting quality debug statements into all code.
never_inline 25 minutes ago [-]
I am young grug who didn't use debuggers much until last year or so.
What sold me on the debugger is the things you can do with it (a small sketch follows the list):
* See values and eval expressions in calling frames.
* Modify the course of execution by eval'ing a mutating expression.
* Set exception breakpoints which stop deep where the exception is raised.
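For concreteness, a minimal sketch of those three with plain pdb (no IDE assumed, toy data):

    import pdb

    def total(order):
        subtotal = sum(item["price"] for item in order["items"])
        breakpoint()        # at the prompt: `up` moves to checkout's frame, `p order` evals there
        return subtotal / order["count"]   # `!order["count"] = 2` then `c` changes how this run ends

    def checkout(order):
        return total(order)

    try:
        checkout({"items": [{"price": 40}, {"price": 60}], "count": 0})
    except ZeroDivisionError:
        pdb.post_mortem()   # the "exception breakpoint" flavor: lands in the frame that raised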
The rise of virtualization, containers, microservices, etc has I think contributed to this being more difficult. Even local dev-test loops often have something other than you launching the executable, which can make it challenging to get the debugger attached to it.
Not any excuse, but another factor to be considered when adding infra layers between the developer and the application.
bluefirebrand 13 minutes ago [-]
Debuggers are also brittle when working with asynchronous code
Debuggers actually can hide entire categories of bugs caused by race conditions when breakpoints cause async functions to resolve in a different order than they would when running in realtime
avhception 3 hours ago [-]
Using a debugger on my own code is easy and I love it.
The second the debugger steps deep into one of the libs or frameworks I'm using, I'm lost and I hate it.
That framework / lib easily has many tens of thousands of person-hours under its belly, and I'm way out of my league.
ronjakoi 1 hours ago [-]
But you can just use the "step out" feature to get back out when you realise you've gone into a library function. Or "step over" when you can see you're about to go into one.
never_inline 18 minutes ago [-]
IDEs tend to have a "just my code" option.
PaulHoule 3 hours ago [-]
I would tend to say that printf debugging is widespread in the Linux-adjacent world because you can't trust a visual debugger to actually be working there because of the general brokenness of GUIs in the Linux world.
I didn't really get into debuggers until (1) I was firmly in Windows, where you expect the GUI to work and the CLI to be busted, and (2) I'd been burned too many times by adding debugging printfs() that got checked into version control and caused trouble.
Since then I've had some adventures with CLI debuggers, such as using gdb to debug another gdb, using both jdb and gdb on the same process at the same time to debug a Java/C++ system, automating gdb, etc. But the thing is, as you say, that there is usually some investment required to get the debugger working for a particular system.
With a good IDE I think JUnit + debugging gives an experience in Java similar to using the REPL in a language like Python, in that you can write some experimental code and experiment with it, but in this case the code doesn't just scroll out of the terminal but ultimately gets checked in as a unit test.
ses1984 2 hours ago [-]
Debuggers exist in the terminal, in vim, and in emacs.
bandrami 2 hours ago [-]
Why would you want a GUI debugger?
PaulHoule 2 hours ago [-]
You can see all your source code while you're debugging. And it's not like emacs, where your termcap is 99.99% right, which means it is 0.01% wrong. (Mac-ers get mad when something is 1px out of place; in Linux culture they'll close a bug report if the window is 15000 px to the left of the screen and invisible, because it's just some little fit-and-finish thing.)
rendaw 13 minutes ago [-]
Are you more productive with a debugger? For all bugs? How so?
btreecat 48 minutes ago [-]
I've had devs more senior than me tell me they don't see a benefit to using a debugger, because they have a type system.
Wtaf?
bloomca 2 hours ago [-]
Personally, I think that a debugger is very helpful in understanding what is going on, but once I am familiar with the code and data structures, I am very often pretty close in my assessment, so scanning the code and inserting multiple print lines is both faster and more productive.
I've only used a debugger recently, in C# and C, when I was learning and practicing them.
oh_my_goodness 2 hours ago [-]
A lot of people think that. That's why it's important to read the essay.
mlinhares 46 minutes ago [-]
I don't use debuggers in general development but use them a lot when writing and running automated tests, much faster and easier to see stuff than with print statements.
novia 3 hours ago [-]
Well, what's your workflow? Is there a particular debugger that you love?
titanomachy 2 hours ago [-]
I’ve learned not to go against the grain with tools, at least at big companies. Probably some dev productivity team has already done all the annoying work needed to make the company’s codebase work with some debugger and IDE, so I use that: currently, it’s VS Code and LLDB, which is fine. IntelliJ and jdb at my last job was probably better overall.
My workflow is usually:
1. insert a breakpoint on some code that I’m trying to understand
2. attach the debugger and run any tests that I expect to exercise that code
3. walk up and down the call stack, stepping occasionally, reading the code and inspecting the local variables at each level to understand how the hell this thing works and why it’s gone horribly wrong this time.
4. use my new understanding to set new, more relevant breakpoints; repeat 2-4.
Sometimes I fiddle with local variables to force different states and see what happens, but I consider this advanced usage, and anyway it often doesn’t work too well on my current codebase.
ipsento606 3 hours ago [-]
I've been doing this professionally for over a decade and have basically never used a debugger
I've often felt that I should, but never enough to actually learn how
shadowgovt 41 minutes ago [-]
It has gotten to the point where when somebody wants to add a DSL to our architecture one of my first questions is "where is your specification for integrating it to the existing debuggers?"
If there isn't one, I'd rather use a language with a debugger and write a thousand lines of code than 100 lines of code in a language I'm going to have to black box.
butterlesstoast 6 hours ago [-]
Professor Carson if you're in the comments I just wanted to say from the bottom of my heart thank you for everything you've contributed. I didn't understand why we were learning HTMX in college and why you were so pumped about it, but many years later I now get it. HTML over the wire is everything.
I've seen your work in Hotwire in my role as a Staff Ruby on Rails Engineer. It's the coolest thing to see you pop up in Hacker News every now and then and also see you talking with the Hotwire devs in GitHub.
Thanks for being a light in the programming community. You're greatly respected and appreciated.
recursivedoubts 6 hours ago [-]
i'm not crying you're crying
deadbabe 6 hours ago [-]
Wasn’t HTMX just a meme? I can’t really tell if it’s serious because of Poe’s Law.
well, at least he is (you are?) consistent in this style of criticizing others' ideas with satirical, sarcasm-fueled prose focused on tearing down straw men.
If you read this and concluded that it's bad, then you probably shouldn't use it.
brushfoot 5 hours ago [-]
Solopreneur making use of it in my bootstrapped B2B SaaS business. Clients don't need or want anything flashy. There are islands of interactivity, and some HTMX sprinkled there has been a great fit.
deadbabe 5 hours ago [-]
Wish I had your clients, instead of ones that say a page needs more “pizazz!”
wvbdmp 5 hours ago [-]
The pizazz clients want sites for their customers, the no-frills clients want sites for them to use themselves.
aspenmayer 5 hours ago [-]
I’m getting zombo.com vibes from this client request.
dgb23 5 hours ago [-]
I started using htmx relatively early on, because it's a more elegant version of what I've been doing anyway for a series of projects.
It's very effective, simple and expressive to work this way, as long as you keep in mind that some client side rendering is fine.
There are a few bits I don't like about it, like defaulting to swap innerHTML instead of outerHTML, not swapping HTML when the status code isn't 200-299 by default and it has some features that I avoid, like inline JSON on buttons instead of just using forms.
So many gems in here but this one about microservices is my favorite:
grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
default-kramer 4 hours ago [-]
I'm convinced that some people don't know any other way to break down a system into smaller parts. To these people, if it's not exposed as a API call it's just some opaque blob of code that cannot be understood or reused.
dkarl 4 hours ago [-]
That's what I've observed empirically over my last half-dozen jobs. Many developers take decomposition and contract design between services seriously, and work until they get it right. I've seen very few developers who put the same effort into decomposing the modules of a monolith and designing the interfaces between them, and never enough on the same team to stop a monolith from turning into a highly coupled amorphous blob.
My grug brain conclusion: Grug see good microservice in many valley. Grug see grug tribe carry good microservice home and roast on spit. Grug taste good microservice, many time. Shaman tell of good monolith in vision. Grug also dream of good monolith. Maybe grug taste good monolith after die. Grug go hunt good microservice now.
pbh101 2 hours ago [-]
Maybe the friction required to mess up a well-factored microservice architecture is just enough higher than in a monolith that the factoring is perceived as more valuable, whereas the implicit expectation with a monolith is that you'll look away for five seconds and someone will ruin it.
stavros 3 hours ago [-]
We've solved this problem by making the modules in the monolith only able to call each other from well-defined APIs, otherwise CI fails.
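One way to approximate that kind of CI gate, sketched in Python under an assumed layout where each module's public surface lives in modules/<name>/api.py — an illustration, not the parent's actual setup:

    import ast
    import pathlib
    import sys

    ROOT = pathlib.Path("modules")

    def owner_of(path):
        # modules/billing/db/queries.py -> "billing"
        return path.relative_to(ROOT).parts[0]

    def violations():
        for path in ROOT.rglob("*.py"):
            owner = owner_of(path)
            tree = ast.parse(path.read_text(), filename=str(path))
            for node in ast.walk(tree):
                if isinstance(node, ast.ImportFrom):
                    targets = [node.module or ""]
                elif isinstance(node, ast.Import):
                    targets = [alias.name for alias in node.names]
                else:
                    continue
                for name in targets:
                    parts = name.split(".")
                    # flag anything that reaches into another module without going through its api
                    if parts[:1] == ["modules"] and len(parts) > 2 \
                            and parts[1] != owner and parts[2] != "api":
                        yield f"{path}:{node.lineno} imports {name}"

    if __name__ == "__main__":
        problems = list(violations())
        print("\n".join(problems))
        sys.exit(1 if problems else 0)

Run as a CI step, the build fails the moment someone imports another module's internals; dedicated boundary-checking tools exist too, the point is just that the check is cheap to automate.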
PaulHoule 3 hours ago [-]
In the Java world both Spring and Guice are meant to do this, and if you have an ISomething you've got the possibility of making an ILocalSomething and an IDistributedSomething and swapping one for the other.
pbh101 2 hours ago [-]
This is generally a bad idea imo. You will fundamentally have a hard time if it's opaque whether your API is network-dependent or not. I suppose you'll be ok if you assume there is always a network call, but that means your client will need to pay that cost every time, even when using the ILocal.
PaulHoule 2 hours ago [-]
It depends on what the API is. For instance you might use something like JDBC or SQLAlchemy to access either a sqlite database or a postgres database.
But you are right that the remote procedure call is a fraught concept for more reasons than one. On one hand there is the fundamental difference between a local procedure call that takes a few ns and a remote call which might take 1,000,000 times longer. There's also the fact that most RPC mechanisms that call themselves RPC mechanisms are terribly complicated, like DCOM or the old Sun RPC. In some sense RPC became mainstream once people started pretending it was REST. People say it is not RPC, but often you have a function in your front-end JavaScript like fetch_data(75) that becomes GET /data/75, and your back end JAXB looks like
    @GET
    @Path("/{id}")
    public List<Data> fetchData(@PathParam("id") int id) { ... }
demosthanos 3 hours ago [-]
> To these people, if it's not exposed as a API call it's just some opaque blob of code that cannot be understood or reused.
I think this is correct as an explanation for the phenomenon, but it's not just a false perception on their part: for a lot of organizations it is actually true that the only way to preserve boundaries between systems over the course of years is to stick the network in between. Without a network layer enforcing module boundaries code does, in fact, tend to morph into a big ball of mud.
I blame a few things for this:
1. Developers almost universally lack discipline.
2. Most programming languages are not designed to sufficiently account for #1.
It's not a coincidence that microservices became popular shortly after Node.js and Python became the dominant web backend languages. A strong static type system is generally necessary (but not sufficient) to create clear boundaries between modules, and both Python and JavaScript have historically been even worse than usual for dynamic languages when it comes to having a strong modularity story.
And while Python and JS have it worse than most, even most of our popular static languages are pretty lousy at giving developers the tools needed to clearly delineate module boundaries. Rust has a pretty decent starting point but it too could stand to be improved.
giantrobot 3 hours ago [-]
3. Company structure poorly supports cross-team or department code ownership
Many companies don't seem to do a good job coordinating between teams. Different teams have different incentives and priorities. If group A needs fixes/work from group B and B has been given some other priority, group A is stuck.
By putting a network between modules different groups can limit blast damage from other teams' modules and more clearly show ownership when things go wrong. If group A's project fails because of B's module it still looks like A's code has the problem.
Upper management rarely cares about nuance. They want to assign blame, especially if it's in another team or department. So teams under them always want clear boundaries of responsibility so they don't get thrown under the bus.
The root cause of a lot of software problems is the organization that produces it more than any individual or even team working on it.
jiggawatts 5 hours ago [-]
I keep trying to explain this to tiny dev teams (1-2 people) that will cheerfully take a trivial web app with maybe five forms and split it up into “microservices” that share a database, an API Management layer, a queue for batch jobs to process “huge” volumes (megabytes) of data, an email notification system, an observablity platform (bespoke!) and then… and then… turn the trivial web forms into a SPA app because “that’s easier”.
Now I understand that “architecture” and “patterns” is a jobs program for useless developers. It’s this, or they’d be on the streets holding a sign saying “will write JavaScript for a sandwich”.
frollogaston 2 hours ago [-]
The only useful definition of a "service" I've ever heard is that it's a database. Doesn't matter what the jobs and network calls are. One job with two DBs is two services, one DB shared by two jobs is one service. We once had 10 teams sharing one DB, and for all intents and purposes, that was one huge service (a disaster too).
mattmanser 5 hours ago [-]
It's all they've seen. They don't get why they're doing it, because they're junior devs masquerading as architects. There's so many 'senior' or 'architect' level devs in our industry who are utterly useless.
One app I got brought in late on, the architect had done some complicated mediator pattern for saving data with a microservice architecture. They'd also semi-implemented DDD.
It was a ten page form.
Literally that was what it was supposed to replace. An existing paper, 10 page, form. One of those "domains" was a list of the 1,000 schools in the country. That needed to be updated once a year.
A government spent millions on this thing.
I could have done it on my todd in 3 months. It just needed to use simple forms, with some simple client side logic for hiding sections, and save the data with an ORM.
The funniest bit was when I said that it couldn't handle the load because the architecture had obvious bottlenecks. The load was known and fairly trivial (100k form submissions in one month).
The architect claimed that it wasn't possible as the architecture was all checked and approved by one of the big 5.
So I brought the test server down during the call by making 10 requests at once.
nyarlathotep_ 2 hours ago [-]
> It's all they've seen. They don't get why they're doing it, because they're junior devs masquerading as architects. There's so many 'senior' or 'architect' level devs in our industry who are utterly useless.
This is the real, actual conversation to be had about "AI taking jobs."
I've seen similar things a lot in the private sector.
There's just loads of people just flailing around doing stuff without really having any expertise other than some vague proxy of years of experience.
It's really not even exactly their fault (people have lives that don't revolve around messing about with software systems design, sure, and there's no good exposure to anything outside of these messes in their workplaces).
But, outside of major software firms (think banks, and other non-"tech" F500s; speaking from experience here) there's loads of people that are "Enterprise Architects" or something that basically spend 5 hours a day in meetings and write 11 lines of C# or something a quarter and then just adopt ideas they heard from someone else a few years back.
Software is really an utterly bizarre field where there's really nothing that even acts as valuable credentials or experience without complete understanding of what that "experience" is actually comprised of. I think about this a lot.
jiggawatts 4 hours ago [-]
> So I brought the test server down during the call by making 10 requests at once.
Back in the very early 2000s I got sent to "tune IIS performance" at a 100-developer ISV working on a huge government project.
They showed me that pressing the form submit button on just two PCs at once had "bad performance".
No, it didn't. One was fast[1], the other took 60 seconds almost exactly. "That's a timeout on a lock or something similar", I told them.
They then showed me their 16-socket database server that must have cost them millions and with a straight face asked me if I thought that they needed to upgrade it to get more capacity. Upgrade to what!? That was the biggest machine I have ever seen! I've never in the quarter century since then seen anything that size with my own two eyes. I don't believe bigger Wintel boxes have ever been made.
I then asked their database developers how they're doing transactions and whether they're using stored procedures or not.
One "senior" database developer asked me what a stored procedure is.
The other "senior" database developer asked me what a transaction is.
"Oh boy..."
[1] Well no, not really, it took about a second, which was long enough for a human button press to "overlap" the two transactions in time. That was a whole other horror story of ODBC connection pooling left off and one-second sleeps in loops to "fix" concurrency issues.
djeastm 5 hours ago [-]
>I keep trying to explain this to tiny dev teams
I'm curious what role you have where you're doing this repeatedly
jiggawatts 4 hours ago [-]
The customer is a government department formed by the merger of a bunch of only vaguely related agencies. They have “inherited” dozens of developers from these mergers, maybe over a hundred if you count the random foreign outsourcers. As you can imagine there’s no consistency or organisational structure because it wasn’t built up as a cohesive team from the beginning.
The agencies are similarly uncoordinated and will pick up their metaphorical credit card and just throw it at random small dev teams, internally, external, or a mix.
Those people will happily take the credit! The money just… disappears. It’s like a magic trick, or one of those street urchins that rips you off when you’re on holiday in some backwards part of the world like Paris.
I get brought in as “the cloud consultant” for a week or two at the end to deploy the latest ball of mud with live wires sticking out of it to production.
This invariably becomes an argument because the ball of mud the street urchins have sold to the customer is not fit for… anything… certainly not for handling PII or money, but they spent the budget and the status reports were all green ticks for years.
Fundamentally, the issue is that they're "going into the cloud" with platform as a service, IaC, and everything, but at some level they don't fully grok what that means and the type of oversight required to make that work at a reasonable cost.
"But the nice sales person from Microsoft assured me the cloud is cheaper!"
angry_octet 4 hours ago [-]
Omg this is something I have experienced too many times, and constantly warring with the other side of the coin: people who never want to make any change unless it is blessed by a consultant from Microsoft/VMWare/SAP and then it becomes the only possible course of action, and they get the CIO to sign off on some idiocy that will never work and say "CIO has decreed Project Falcon MUST SUCCEED" when CIO can't even tie his shoelaces. Giant enterprise integration will happen!
In fact we're going through one of these SAP HANA migrations at present and it's very broken, because the prime contractor has delivered a big ball of mud with lots of internal microservices.
vermilingua 4 hours ago [-]
Is this DCS in NSW? If so that would explain so much about my own work interactions with them.
someothherguyy 5 hours ago [-]
> Now I understand that “architecture” and “patterns” is a jobs program for useless developers.
Yet, developers are always using patterns and are thinking about architecture.
Here you are doing so too, a pattern, "form submission" and an architecture, "request-response".
fellatio 3 hours ago [-]
Unfortunately it is useful to do this for many other reasons!
api 3 hours ago [-]
I have a conspiracy theory that it’s a pattern pushed by cloud to get people to build applications that:
- Cannot be run without an orchestrator like K8S, which is a bear to install and maintain, which helps sell managed cloud.
- Uses more network bandwidth, which they bill for, and CPU, which they bill for.
- Makes it hard to share and maintain complex or large state within the application, encouraging the use of more managed database and event queue services as a substitute, which they bill for. (Example: a monolith can use a queue or a channel, while for microservices you’re going to want Kafka or some other beast.)
- Can’t be run locally easily, meaning you need dev environments in cloud, which means more cloud costs. You might even need multiple dev and test environments. That’s even more cloud cost.
- Tends to become dependent on the peculiarities of a given cloud host, such as how they do networking, increasing cloud lock in.
Anyone else remember how cloud was pitched as saving money on IT? That was hilarious. Knew it was BS way back in the 2000s and that it would eventually end up making everything cost more.
nyarlathotep_ 2 hours ago [-]
It's 100% this; you're right on the money (pun intended).
Don't forget various pipelines, IaC, pipelines for deploying IaC, test/dev/staging/whatever environments, organization permissions strategies etc etc...
When I worked at a large, uh, cloud company as a consultant, solutions were often tailored towards "best practices"--this meant, in reality, large complex serverless/containerized things with all sorts of integrations for monitoring, logging, NoSQL, queues etc, often for dinky little things that an RPI running RoR or NodeJS could serve without breaking a sweat.
With rare exceptions, we'd never be able to say, deploy a simple go server on a VM with server-side rendered templates behind a load balancer with some auto-scaling and a managed database. Far too pedestrian.
Sure, it's "best practices" for "high-availability" but was almost always overkill and a nightmare to troubleshoot.
api 42 minutes ago [-]
There is now an entire generation of developers steeped in SaaS who literally don’t know how to do anything else, and have this insanely distorted picture of how much power is needed to do simple things.
It’s hard to hire people to do anything else. People don’t know how to admin machines so forget bare metal even though it can be thousands of times cheaper for some work loads (especially bandwidth).
You’re not exaggerating with a raspberry pi. Not at all.
pphysch 3 hours ago [-]
Those are all good points, but missing the most important one, the "Gospel of Scalability". Every other startup wants to be the next Google and therefore thinks they need to design service boundaries that can scale infinitely...
arturocamembert 6 hours ago [-]
> given choice between complexity or one on one against t-rex, grug take t-rex: at least grug see t-rex
I think about this line at least once a week
EstanislaoStan 3 hours ago [-]
"...even as he fell, Leyster realized that he was still carrying the shovel. In his confusion, he’d forgotten to drop the thing. So, desperately, he swung it around with all his strength at the juvenile’s legs.
Tyrannosaurs were built for speed. Their leg bones were hollow, like a bird’s. If he could break a femur …
The shovel connected, but not solidly. It hit without breaking anything. But, still, it got tangled up in those powerful legs. With enormous force, it was wrenched out of his hands. Leyster was sent tumbling on the ground.
Somebody was screaming. Dazed, Leyster raised himself up on his arms to see Patrick, hysterically slamming the juvenile, over and over, with the butt of the shotgun. He didn’t seem to be having much effect. Scarface was clumsily trying to struggle to its feet. It seemed not so much angry as bewildered by what was happening to it.
Then, out of nowhere, Tamara was standing in front of the monster. She looked like a warrior goddess, all rage and purpose. Her spear was raised up high above Scarface, gripped tightly in both hands. Her knuckles were white.
With all her strength, she drove the spear down through the center of the tyrannosaur’s face. It spasmed, and died. Suddenly everything was very still."
boricj 4 hours ago [-]
grug obviously never took on invisible t-rex
this grug keeps one on one invisible t-rex, grug cursed
dgb23 5 hours ago [-]
One thing to appreciate is that this article comes from someone who can do the more sophisticated (complex) thing, but tries not to based on experience.
There is of course a time and place for sophistication, pushing for higher levels of abstraction and so on. But this grug philosophy is saying that there isn't any inherent value in doing this sort of thing and I think that is very sound advice.
Also I noticed AI assistance is more effective with consistent, mundane and data driven code. YMMV
ahartmetz 4 hours ago [-]
The time and place for sophistication and abstraction is when and where they make the code easier to understand without first needing a special course to explain why it's easier to understand. (It varies by situation which courses can be taken for granted.)
cowthulhu 1 hour ago [-]
I feel like this would fit the bell curve meme -
Novice dev writes simple code
Intermediate dev writes complex code
Expert dev writes simple code
cortesoft 4 hours ago [-]
> Everything should be made as simple as possible, but not simpler
GMoromisato 4 hours ago [-]
One of the many ironies of modern software development is that we sometimes introduce complexity because we think it will "save time in the end". Sometimes we're right and it does save time--but not always and maybe not often.
Three examples:
DRY (Don't Repeat Yourself) sometimes leads to premature abstraction. We think, "hey, I bet this pattern will get used elsewhere, so we need to abstract out the common parts of the pattern and then..." And that's when the Complexity Demon enters.
We want as many bugs as possible caught at compile-time. But that means the compiler needs to know more and more about what we're actually trying to do, so we come up with increasingly complex types which tax your ability to understand.
To avoid boilerplate we create complex macros or entire DSLs to reduce typing. Unfortunately, the Law of Leaky Abstractions means that when we actually need to know the underlying implementation, our head explodes.
Our challenge is that each of these examples is sometimes a good idea. But not always. Being able to decide when to introduce complexity to simplify things is, IMHO, the mark of a good software engineer.
mplanchard 2 hours ago [-]
For folks who seek a rule of thumb, I’ve found SPoT (single point of truth) a better maxim than DRY: there should be ideally one place where business logic is defined. Other stuff can be duplicated as needed and it isn’t inherently a bad thing.
To modulate DRY, I try to emphasize the “rule of three”: up to three duplicates of some copy/paste code is fine, and after that we should think about abstracting.
Of course no rule of thumb applies in all cases, and the sense for that is hard to teach.
GMoromisato 16 minutes ago [-]
100% agree. Duplication is far cheaper than the wrong abstraction.
Student: I notice that you duplicated code here rather than creating an abstraction for both.
Master: That is correct.
Student: But what if you need to change the code in the future?
Master: Then I will change it in the future.
At that point the student became enlightened.
PaulHoule 2 hours ago [-]
I still believe that most code, on average, is not DRY enough, but for projects I do on my own account I've recently developed a doctrine of "there are no applications, only screens" and funny enough this has been using HTMX which I think the author of that blog wrote.
Usually I make web applications using Sinatra-like frameworks like Flask or JAXB where I write a function that answers URLs that match a pattern and a "screen" is one or more of those functions that work together and maybe some HTML templates that go with them. For instance there might be a URL for a web page that shows data about a user, and another URL that HTMX calls when you flip a <select> to change the status of that user.
Assuming the "application" has the stuff to configure the database connection and file locations and draw HTML headers and footers and such, there is otherwise little coupling between the screens so if you want to make a new screen you can cut and paste an old screen and modify it, or you can ask an LLM to make you a screen or endpoint and if it "vibe coded" you a bad screen you can just try again to make another screen. It can make sense to use inheritance or composition to make a screen that can be specialized, or to write screens that are standalone (other than fetching the db connection and such.)
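A toy sketch of one such self-contained "screen" in Flask, with an in-memory USERS dict standing in for the real database plumbing (and assuming the page pulls in the htmx script):

    from flask import Flask, request, render_template_string

    app = Flask(__name__)
    USERS = {7: {"name": "alice", "status": "active"}}   # stand-in for the real db layer

    PAGE = """
    <script src="https://unpkg.com/htmx.org"></script>  <!-- pin a version in real use -->
    <h1>{{ user.name }}</h1>
    <select name="status" hx-post="/user/{{ uid }}/status" hx-target="#msg">
      <option {{ 'selected' if user.status == 'active' else '' }}>active</option>
      <option {{ 'selected' if user.status == 'disabled' else '' }}>disabled</option>
    </select>
    <div id="msg"></div>
    """

    @app.get("/user/<int:uid>")
    def user_page(uid):
        # the page that shows data about a user
        return render_template_string(PAGE, uid=uid, user=USERS[uid])

    @app.post("/user/<int:uid>/status")
    def set_status(uid):
        # the URL htmx calls when the <select> is flipped; returns a fragment, not a page
        USERS[uid]["status"] = request.form["status"]
        return f"status is now {USERS[uid]['status']}"

    if __name__ == "__main__":
        app.run(debug=True)

Each screen is just a couple of route handlers plus a template, so copying one to start the next screen stays cheap.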
The origin story was that I was working on a framework for making ML training sets called "Themis" that was using microservices, React, Docker and such. The real requirement was that we were (1) always adding new tasks, and (2) having to create simple but always optimized "screens" for those tasks because if you are making 20,000 judgements it is bad enough to click 20,000 times, if you have to click 4x for each one and it adds up to 80,000 you will probably give up. As it was written you had to write a bunch of API endpoints as part of a JAXB application and React components that were part of a monolithic React app and wait 20 minutes for typescript and Docker and javac to do their things and if you are lucky it boots up otherwise you have to start over.
I wrote up a criticism of Themis and designed "Nemesis" that was designed for rapid development of new tasks and it was a path not taken at the old job, but Nemesis and I have been chewing through millions of instances of tasks ever since.
GMoromisato 23 minutes ago [-]
Fascinating!
I also recoiled at the complexity of React, Docker, etc. and went a different path: I basically moved all the code to the server and added a way to "project" the UI to the browser. From the program's perspective, you think you're just showing GUI controls on a local screen. There is no client/server split. Under the covers, the platform talks to some JavaScript on the browser to render the controls.
This works well for me since I grew up programming on Windows PCs, where you have full control over the machine. Check it out if you're interested: https://gridwhale.com.
I think pushing code to the server via HTMX and treating the browser like a dumb terminal has the same kind of advantage: you only have to worry about one system.
Fundamentally, IMHO, the client/server split is where all the complexity happens. If you're writing two programs, one on the client and one on the server, you're basically creating a distributed system, which we know is very hard.
dmurray 3 hours ago [-]
I can't believe this is (2022). I would have confidently told you I read this 10 years ago and guessed that it was already a classic then.
poidos 6 hours ago [-]
This is, I think, my favorite essay about building software. The style is charming (I can see why some might not like it) and the content is always relevant.
minkzilla 5 hours ago [-]
sad but true: learn "yes" then learn blame other grugs when fail, ideal career advice
When I first entered the corporate world I thought this wasn’t true, there was just poor communication on part of technical teams. I learn I wrong. grug right.
mcqueenjordan 39 minutes ago [-]
One of my favorite LLM uses is to feed it this essay, then ask it to assume the persona of the grug-brained developer and comment on $ISSUE_IM_CURRENTLY_DEALING_WITH. Good stress relief.
12_throw_away 6 hours ago [-]
This has by far the best discussion of the visitor pattern I've come across.
dgb23 5 hours ago [-]
I don't work in typical OO codebases, so I wasn't aware of what the visitor pattern even is. But there's an _excellent_ book about building an interpreter (and vm) "crafting interpreters". It has a section where it uses the visitor pattern.
I remember reading through it and not understanding why it had to be this complicated and then just used a tagged union instead.
Maybe I'm too stupid for OO. But I think that's kind of the point of the grug article as well. Why burden ourselves with indirection and complexity when there's a more straight forward way?
recursivedoubts 4 hours ago [-]
I love crafting interpreters and mention it on grugbrain:
but the visitor pattern is nearly always a bad idea IMO: you should just encode the operation in the tree if you control it or create a recursive function that manually dispatches on the argument type if you don't
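A sketch of that second option in Python (3.10+ for match), using dataclasses as a tagged union — illustrative only, not code from the book:

    from dataclasses import dataclass

    @dataclass
    class Num:
        value: int

    @dataclass
    class Add:
        left: "Expr"
        right: "Expr"

    @dataclass
    class Neg:
        operand: "Expr"

    Expr = Num | Add | Neg

    def evaluate(node: Expr) -> int:
        # one function holds the whole "operation"; no accept()/visit() indirection
        match node:
            case Num(value):
                return value
            case Add(left, right):
                return evaluate(left) + evaluate(right)
            case Neg(operand):
                return -evaluate(operand)

    print(evaluate(Add(Num(1), Neg(Num(3)))))   # -2

Adding a new operation (pretty-printing, bytecode emission) is just another function like evaluate; adding a new node type means every match needs a new case, which is the usual expression-problem trade-off that the visitor pattern makes in the opposite direction.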
Thank you for those links. The first one is especially clear.
However, this is just not something that I typically perceive as a problem. For example in the book that I mentioned above, I didn't feel the need to use it at all. I just added the fields or the functions that were required.
In the first link you provided, the OCaml code seems to use unions as well (I don't know the language). I assume OCaml checks for exhaustive matching, so it seems extremely straight forward to extend this code.
On the other hand I have absolutely no issues with a big switch case in a more simple language. I just had a look at the code I wrote quite a while ago and it looks fine.
12_throw_away 4 hours ago [-]
As far as I understand it, the limited circumstances when you absolutely need the visitor pattern are when you have type erasure, i.e., can't use a tagged union or its equivalent? In that case visitors are AIUI a very clever trick to use vtables or whatever to get back to your concrete types! but ... clever tricks make grug angry.
zem 3 hours ago [-]
even when you have tagged unions, visitors are a useful way to abstract a heterogenous tree traversal from code that processes specific nodes in the tree. e.g. if you have an ast with an `if` node and subnodes `condition`, `if_body`, and `else_body` you could either have the `if node == "if" then call f(subnode) for subnode in [node.condition, node.if_body, node.else_body]` and repeat that for every function `f` that walks the tree, or define a visitor that takes `f` as an argument and keep the knowledge of which subnodes every node has in a single place.
tayo42 4 hours ago [-]
What do you mean by tagged union? And how does it make the visitor pattern not needed?
In languages influenced by ML (like contemporary Java!) it is common in compiler work that you might have an AST or similar kind of structure, and you end up writing a lot of functions that use pattern matching over the node types to implement various "functions" such as rewriting the AST into bytecode, building a symbol table, or something. In some cases you could turn this inside out and put a bunch of methods on a bunch of classes that do various things for each kind of node, but if you use pattern matching you can neatly group together all the code that does the same thing to all the different objects rather than forcing that code to be spread out on a bunch of different objects.
tayo42 1 hour ago [-]
OK yeah I see, that's natural to do with like rust enums
Java doesn't support this though I thought?
PaulHoule 56 minutes ago [-]
Currently Java supports records (finalized in JDK 16) and sealed classes (JDK 17), which together work as algebraic data types; pattern matching in switch was finalized in JDK 21. The syntax is pretty sweet.
I care about naming, and I find the name of the visitor pattern infuriatingly bad. Very clubbable. I think I have never created one called "Visitor" in my life.
Given the syntax tree example from Wikipedia, I think I'd call it AstWalker, AstItem::dispatch(AstWalker) and AstWalker::process(AstItem) instead of Visitor, AstItem::accept(AstVisitor) and AstVisitor::visit(AstItem).
"The walker walks the AST, each items sends it to the next ones, and the walker processes them". That means something. "The visitor visits the AST items, which accept it" means basically nothing. It's more general, but also contains very little useful information. So the visitor might need different names in different situations. Fine. Just add a comment "visitor pattern" for recognizability.
I remember a situation where I needed to walk two object trees for a data comparison and import operation. I created an AbstractImporter that walked the two trees in lockstep in a guaranteed order and invoked virtual methods for each difference. It had a non-virtual doImport() for the ordered data walk, and doImport() called virtual methods like importUserAccount(), importUserAccountGroupMemberships(), etc. There were two subclasses of AbstractImporter: ImportAnalyzer collected differences to display them, then there was a selection step implemented by a kind of list model + a bit of controller logic, then an ImportWorker to make the selected changes. All rather specific terminology and not exactly the visitor pattern.
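A much-simplified, runnable Python rendering of that shape — dicts stand in for the two object trees, and only one kind of difference is modeled:

    from abc import ABC, abstractmethod

    class AbstractImporter(ABC):
        def do_import(self, source, target):
            # the non-virtual walk: ordering lives in exactly one place
            for name in sorted(set(source) | set(target)):
                if source.get(name) != target.get(name):
                    self.import_user_account(name, source.get(name), target.get(name))

        @abstractmethod
        def import_user_account(self, name, src, dst): ...

    class ImportAnalyzer(AbstractImporter):
        def __init__(self):
            self.differences = []
        def import_user_account(self, name, src, dst):
            self.differences.append((name, src, dst))     # just collect, for display/selection

    class ImportWorker(AbstractImporter):
        def __init__(self, selected):
            self.selected = selected
        def import_user_account(self, name, src, dst):
            if name in self.selected:
                print(f"importing {name}: {src!r} -> {dst!r}")   # actually make the change

    analyzer = ImportAnalyzer()
    analyzer.do_import({"ann": 1, "bob": 2}, {"ann": 1, "bob": 3})
    print(analyzer.differences)   # [('bob', 2, 3)]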
tempaway43563 5 hours ago [-]
I went to look for that bit. It said:
"Bad"
lol
magarnicle 4 hours ago [-]
I know, I get it, but I've realised that I'm not actually grug-brained. The way my brain works, I remember things pretty well; I like to get into the details of systems. So if more complexity in the code means the app can do more or a task is automated away I'll make the change and know I'll be able to remember how it works in the future.
This doesn't mean OP is bad advice, just make a conscious decision about what to do with complexity and understand the implications.
anonymars 37 minutes ago [-]
What about the rest of your team?
pixelatedindex 3 hours ago [-]
That knowledge of how it works should really just be captured in comments, right? And if it's a bit more complex, perhaps a markdown file in a docs folder or stuffed in a README? When working with a large enough organization, tribal knowledge is an invisible t-rex.
magarnicle 2 hours ago [-]
I don't think comments can capture the complexity of everything - there's too much interaction between systems to explain it all. I'm probably unique here in that my tribe is just one person: I wouldn't recommend adopting a pet t-rex in a team.
OMG is that the technical name for my development style? I'm not like super deep in technobabble since there are so many coined names and references that it is nearly impossible to assign the correct one.
Grug brained dev I am I guess.
xupybd 30 minutes ago [-]
The guy that wrote this is a raging schizophrenic that argues with his own alt on X
layoric 5 hours ago [-]
Very entertaining (and enlightening) read. I love the 'reach for club' visuals, made me laugh out loud a few times.
alerter 6 hours ago [-]
Probably my single favourite programming article.
JohnScolaro 6 hours ago [-]
This was shared with me years ago by another developer I worked with. I still reference it today as I continue my eternal battle with the complexity demon.
skippyboxedhero 5 hours ago [-]
trap in crystal
dekhn 6 hours ago [-]
It took me decades to learn these lessons on my own.
many, many shiney rock lost to agile shaman!
replete 5 hours ago [-]
grug read words and move head up down lot
grug make other work grugs read this after yellow circle arrive next
grug thank clever grug
vonnik 5 hours ago [-]
I've been fooling around with applying grugspeak to famous essays on tech.
it's the same content + an index, so not worth buying unless you want the hard copy, but maybe a good convo-starter for the ol'dev team
jonathan-adly 4 hours ago [-]
I send this article as part of onboarding for all new devs we hire. It is super great to keep a fast growing team from falling into the typical cycle of more people, more complexity.
ysofunny 5 hours ago [-]
this very cool
I smell a formal grammar behind dumbiffied grug english.
nonetheless, I think that when it says:
> so grug say again and say often: complexity very, very bad
at the end of that section, it should say instead:
> so grug say again and say often: complexity very, very, very bad
this disambiguates 3 instances of same concept/idea AND, even better, showcases 3 values of increasing strength like for warning, error, critical use. most compact.
end of groog
devrandoom 5 hours ago [-]
This will be discussed in the next standup and everyone has to have an opinion. We'll need approval from legal and that takes at least a week so we want to minimise ping pong emails.
But it should be fairly quick, expect an updated version around end of summer or just after.
AutistiCoder 5 hours ago [-]
Grug user want keyword research tool.
Grug user find program that does that and more.
Grug user confused by menu.
Grug user wish tool only did keyword research.
ednite 6 hours ago [-]
Some solid nuggets here. Totally agree on keeping it simple and not rushing. I’ve rushed things before to meet unrealistic deadlines, resulting in bad first impression. Took a step back, simplified, and let the design emerge. Ended up with something users actually loved. Thanks for sharing.
sammy0910 57 minutes ago [-]
this is a masterpiece of writing -- well done
tartoran 5 hours ago [-]
This has to be adapted to the LLM era as well.
ethan_smith 1 hour ago [-]
LLMs actually reinforce grug principles - they work best with simple, consistent patterns and struggle with the same complexity demons that confuse humans.
tptacek 5 hours ago [-]
In my experience, LLM agents are pretty Grug-brained.
Cerium 5 hours ago [-]
They might be grug brained but they act big brained. Very clubbable.
tptacek 3 hours ago [-]
That's one of their great charms, because you don't have to feel bad when you club them.
Night_Thastus 5 hours ago [-]
Grug has common sense. LLMs don't even have that.
culebron21 5 hours ago [-]
I read it back then and then forgot the word and couldn't find it with search. LOL. Thanks for reposting!
In what way would you say they are related? I've read both but don't see it
shadowgovt 5 hours ago [-]
The anecdote about rob pike and logging made me chuckle.
Fun fact about Google: logging is like 95% of the job, easily... From tracking everything every service is doing all the time to wrangling the incoming raw crawl data, it's all going through some kind of logging infrastructure.
I was there when they actually ran themselves out of integers; one of their core pieces of logging infrastructure used a protocol buffer to track datatypes of logged information. Since each field in a protocol buffer message is tagged with an integer key, they hit the problem when their top-level message bumped up against the (if memory serves) int16 implementation limit on maximum tag ID and had to scramble to fix it.
jrodewig 5 hours ago [-]
This is one of my favorite pieces of non-fiction. No sarcasm.
chris_wot 4 hours ago [-]
I've always wondered at the best way of doing integration tests. There is a lot of material on unit tests, but not so much on integration tests. Does anyone know of a good book on the subject?
2OEH8eoCRo0 2 hours ago [-]
I still call coffee "black think juice"
jongjong 3 hours ago [-]
I used to be against complexity and worried about such narratives making fun of people who tried to avoid it but now I'm grateful. If software developers didn't have such strong biases in favor of complexity, LLMs would probably be producing really high quality code and have replaced us all by now... Instead, because the average code online is over-engineered, un-reusable junk, their training set is a mess and hence they can only produce overengineered junk code. Also, this may provide long term job safety since now LLMs are producing more and more code online, further soiling the training set.
joeevans1000 4 hours ago [-]
htmx.
and clojure.
mmmmm.
ChrisArchitect 6 hours ago [-]
might be some good points in here but it's sooo hard to read.
(no affiliation, I enjoy the original and wish for it to reach as many people as possible)
ern 2 hours ago [-]
Good one....I put it into Claude with a prompt to not change the meaning but make it more normal
sodapopcan 6 hours ago [-]
RIP phone readers.
graypegg 6 hours ago [-]
Works great in reader mode! Better than most actually.
anthomtb 6 hours ago [-]
I like it. Grug is grammatically incorrect but concise, which forces my Big Brain to step back, allowing my Grug Brain to slowly absorb the meaning of each word.
It isn't as skimmable as some other writing styles, but if you read it one word at a time (either aloud or "in your head"), it's not too bad.
kunzhi 5 hours ago [-]
Sometimes if I'm reading something and having trouble with the words or sentences, I'll slow down and focus on the individual letters. Usually helps a tremendous amount.
fwip 5 hours ago [-]
Apologies if I came across as condescending.
IshKebab 5 hours ago [-]
Since you're being downvoted I just wanted to say I agree. I'm sure it was cathartic to write but it's not a good way to actually communicate.
Also like a lot of programming advice it isn't actually that useful. Advice like "avoid complexity" sounds like it is good advice, but it isn't good advice. Of course you should avoid complexity. Telling people to do that is about as useful as telling people to "be more confident".
We mostly learn to avoid complexity through trial and error - working on complex and simple systems, seeing the pitfalls, specific techniques to avoid complexity, what specific complexity is bad, etc. Because not all complexity is bad. You want simplicity? Better trade in your Zen 4 and buy a Cortex M0. And I hope you aren't running a modern OS on it.
Ok, "avoid unnecessary complexity"? Great, how exactly do you know what's unnecessary? Years of experience, that's how. Nothing you can distill to a gimmicky essay.
yawaramin 4 hours ago [-]
Yeah that's the point, to communicate the idea that some complexity is unnecessary, and we should beware of it, instead of just accepting wholesale whatever complexity is handed to us, like many in this industry do.
PaulHoule 6 hours ago [-]
Content 1, Style 0
Thinking you are too smart leads to all sorts of trouble, like using C++ and being proud of it.
If you think your intelligence is a limited resource, however, you'll conserve it and not waste it on tools, process, and the wrong sort of design.
idlewords 6 hours ago [-]
The style is the most charming part of this essay.
parpfish 5 hours ago [-]
i think the parent is agreeing with the grug article by saying "content wins over style", not giving the style of the article a score of 0
devrandoom 5 hours ago [-]
C++ called and filed a complaint about receiving a haymaker of a suckerpunch out of nowhere.
flkenosad 4 hours ago [-]
Honestly, burn.
guywithahat 6 hours ago [-]
It would be really embarrassing to use one of the most popular, time-tested languages.
Even if we decided to use Zig for everything, hiring for less popular languages like Zig, Lua, or Rust is significantly harder. There are no developers with 20 years of experience in Zig.
juliangmp 5 hours ago [-]
You don't need developers with 20 years of experience in a specific language.
Any decent engineer must be able to work with other languages and tools.
What you're looking for is someone with experience building systems in your area of expertise.
And even then, experience is often a poor substitute for competence.
hiimkeks 5 hours ago [-]
> You don't need developers with 20 years of experience in a specific language.
You may in trivia quiz languages that have more features than anyone can learn in a lifetime
shadowgovt 5 hours ago [-]
Being at a firm where the decision to use C++ was made, the thought process went something like this:
"We're going to need to fit parts of this into very constrained architectures."
"Right, so we need a language that compiles directly to machine code with no runtime interpretation."
"Which one should we use?"
"What about Rust?"
"I know zero Rust developers."
"What about C++?"
"I know twenty C++ developers and am confident we can hire three of them tomorrow."
The calculus at the corporate level really isn't more complicated than that. And the thing about twenty C++ developers is that they're very good at using the tools to stamp the undefined behavior out of the system because they've been doing it their entire careers.
guywithahat 5 hours ago [-]
People sometimes forget we're not just trying to use the shiniest tool for fun, we're trying to build something with deadlines that must be profitable. If you want to hire for large teams or do hard things that require software support, you often have to use a popular language like C++.
kragen 5 hours ago [-]
How does someone know twenty C++ developers and zero C developers though?
frollogaston 2 hours ago [-]
Probably they know C but the project is complex enough to warrant something else. Personally I'd rather C++ not exist and it's just C and Rust, but I don't have a magic wand.
shadowgovt 49 minutes ago [-]
A lot of fintech. Bloomberg is real into C++.
flkenosad 4 hours ago [-]
Born in the 80s.
Jtsummers 4 hours ago [-]
That wouldn't stop someone from knowing any C developers. It's still a common language today, and was more common when those 80s kids would have become adults and entered the industry.
PaulHoule 4 hours ago [-]
As a kid in the 1980s I thought something was a bit off about K&R, kind of a discontinuity. Notably C succeeded where PL/I failed, but by 1990 or so you started to see actual specs written by adults, such as Common Lisp and Java, where you really can start at the beginning and work to the end and not have to skip forward or read the spec twice. That discontinuity is structural to C, though, and also to C++, and you find it in most books about C++ and in little weird anomalies like the way typedefs force the parser to have access to the symbol table.
Sure C was a huge advance in portability but C and C++ represent a transitional form between an age where you could cleanly spec a special purpose language like COBOL or FORTRAN but not quite spec a general systems programming language and one in which you could. C++, thus, piles a huge amount of complexity on top of a foundation which is almost but not quite right.
kragen 4 hours ago [-]
Maybe they use only Microsoft Windows?
WD-42 3 hours ago [-]
And none of those 20 C++ developers can learn rust? What’s wrong with them?
spauldo 2 minutes ago [-]
They're probably busy writing code for a living.
PaulHoule 2 hours ago [-]
Personally I think Rust is better thought out than C++ but that I've got better things to do than fight with the borrow checker and I appreciate that the garbage collector in Java can handle complexity so I don't have to.
I think it's still little appreciated how revolutionary garbage collection is. You don't have maven or cargo for C because you can't really smack arbitrary C libraries together unless the libraries have an impoverished API when it comes to memory management. In general if you care about performance you would want to pass a library a buffer from the application in some cases, or you might want to pass the library custom malloc and free functions. If your API is not impoverished the library can never really know if the application is done with the buffer and the application can't know if the library is done. But the garbage collector knows!
It is painful to see Rustifarians pushing bubbles around under the rug when the real message the borrow checker is trying to tell them is that their application has a garbage-collector shaped hole in it. "RC all the things" is an answer to reuse but if you are going to do that why not just "GC all the things?" There's nothing more painful than watching people struggle with async Rust because async is all about going off the stack and onto the heap and once you do that you go from a borrowing model that is simple and correct to one that is fraught and structurally unstable -- but people are so proud of their ability to fight complexity they can't see it.
shadowgovt 43 minutes ago [-]
In our case, a garbage collector is a non-starter because it can't make enough guarantees about either constraining space or time to make the suits happy (embedded architecture with some pretty strict constraints on memory and time to execute for safety).
I do think that there are a lot of circumstances a garbage collector is the right answer where people, for whatever reason, decide they want to manage memory themselves instead.
frollogaston 2 hours ago [-]
They can, but why pay 20 people to learn Rust?
shadowgovt 46 minutes ago [-]
Flip the question around: what is the benefit when they already know C++? Most of the safety promises one could make with Rust they can already give through proper application of sanitizers and tooling. At least they believe they can, and management believes them. Grug not ask too many questions when the working prototype is already sitting on Grug's desk because someone hacked it together last night instead of spending that time learning a new language.
I suspect that in a generation or so Rust will probably be where C++ is now: the language business uses because they can quickly find 20 developers who have a career in it.
It's useful for a beginner, e.g. in a for loop, to see how `i` changes at the end of the loop. And similarly with return values of functions and so on.
Hard to get wrong, tedious to type and a huge speed increase to visually scan the output.
Most of us aren't Brian Kernighan or Rob Pike.
I am very happy for people who are, but I am firmly at a grug level.
A point that may be pedantic: I don't add (and then remove) "print" statements. I add logging code, that stays forever. For a major interface, I'll usually start with INFO level debugging, to document function entry/exit, with param values. I add more detailed logging as I start to use the system and find out what needs extra scrutiny. This approach is very easy to get started with and maintain, and provides powerful insight into problems as they arise.
I also put a lot of work into formatting log statements. I once worked on a distributed system, and getting the prefix of each log statement exactly right was very useful -- node id, pid, timestamp, all of it fixed width. I could download logs from across the cluster, sort, and have a single file that interleaved actions from across the cluster.
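A minimal sketch of that kind of fixed-width prefix using java.util.logging (the ClusterLogFormatter name, field widths, and node id are invented):

    import java.util.logging.*;

    // Hypothetical formatter: fixed-width node id, pid and timestamp so logs
    // from many machines can be concatenated, sorted, and read as one timeline.
    class ClusterLogFormatter extends Formatter {
        private final String nodeId;

        ClusterLogFormatter(String nodeId) {
            this.nodeId = nodeId;
        }

        @Override
        public String format(LogRecord record) {
            return String.format("%-8s %8d %13d %-7s %s%n",
                    nodeId,                                // fixed-width node id
                    ProcessHandle.current().pid(),         // pid
                    record.getInstant().toEpochMilli(),    // sortable epoch-millis timestamp
                    record.getLevel(),
                    formatMessage(record));
        }
    }

    public class LogSetup {
        public static void main(String[] args) {
            Logger log = Logger.getLogger("app");
            ConsoleHandler handler = new ConsoleHandler();
            handler.setFormatter(new ClusterLogFormatter("node-03"));
            log.setUseParentHandlers(false); // avoid duplicate default output
            log.addHandler(handler);
            log.info("request accepted");
        }
    }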
This is an anti-pattern which results in voluminous log "noise" when the system operates as expected. To the degree that I have personally seen gigabytes per day produced by employing it. It also can litter the solution with transient concerns once thought important and are no longer relevant.
If detailed method invocation history is a requirement, consider using the Writer Monad[0] and only emitting log entries when either an error is detected or in an "unconditionally emit trace logs" environment (such as local unit/integration tests).
0 - https://williamyaoh.com/posts/2020-07-26-deriving-writer-mon...
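Something in that spirit can be sketched without the monad machinery: a plain Java buffer that accumulates trace entries in memory and only writes them out on the failure path (TraceBuffer and the messages are invented, not from the linked post):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.logging.Logger;

    // Illustrative sketch, not the Writer monad itself: collect trace entries
    // per request and only emit them when something actually goes wrong.
    class TraceBuffer {
        private final List<String> entries = new ArrayList<>();

        void trace(String message) {
            entries.add(message); // cheap in-memory append, no I/O yet
        }

        void flushOnError(Logger log, Throwable cause) {
            entries.forEach(log::warning);   // emit the accumulated trace...
            log.severe("failed: " + cause);  // ...only on the failure path
        }
    }

    public class TraceBufferDemo {
        private static final Logger LOG = Logger.getLogger("app");

        public static void main(String[] args) {
            TraceBuffer trace = new TraceBuffer();
            try {
                trace.trace("loading user 42");
                trace.trace("applying discount rules");
                throw new IllegalStateException("discount table missing");
            } catch (RuntimeException e) {
                trace.flushOnError(LOG, e); // the happy path would log nothing
            }
        }
    }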
Some IDEs promise to solve that, but I’ve not been impressed thus far.
YMMV based on language/runtime/toolkit of course. This might get added to my wishlist for my next language of choice.
I can always drop an entire state object into the log if I need it, but the only way for a debugger to approximate what a log can give me is for me to step through a bunch of break points and hold the time stream in my head.
The one place where a debugger is straight up better is if I know exactly which unit of code is failing and that unit has complicated logic that is worth stepping through line by line. That's what they were designed for, and they're very useful for that, but it's also not the most common kind of troubleshooting I run into.
One of the first things I do in a codebase is get some working IDE/editor up where I can quickly run the program under a debugger, even if I'm not immediately troubleshooting something. It's never long before I need to use it.
I was baffled when I too encountered this. Even working collaboratively with people they'd have no concept of how to use a debugger.
"No, set a breakpoint there"
"yeah now step into the function and inspect the state of those variables"
"step over that"
: blank stares at each instance :
I've never worked somewhere where logging is taken seriously. Like, our AWS systems produce logs and they get collected somewhere, but none of our code ever does any serious logging.
If people like print-statement debugging so much, then double down on it and do it right, with a proper logging framework and putting quality debug statements into all code.
What sold me on the debugger is the things you can do with it.
One other such tool is REPL. I see REPL and debugger as complementary to each other, and have some success using both together in VSCode, which is pretty convenient with autoreload set. (https://mahesh-hegde.github.io/posts/vscode-ipython-debuggin...)
Not any excuse, but another factor to be considered when adding infra layers between the developer and the application.
Debuggers actually can hide entire categories of bugs caused by race conditions when breakpoints cause async functions to resolve in a different order than they would when running in realtime
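A toy Java sketch of the kind of timing-dependent bug meant here (the class name and loop counts are invented): run at full speed the two tasks interleave and drop updates, while pausing one of them at a breakpoint tends to serialize them and make the bug disappear.

    import java.util.concurrent.CompletableFuture;

    public class RaceDemo {
        static int counter = 0; // intentionally unsynchronized shared state

        static void bump() {
            for (int i = 0; i < 1_000_000; i++) {
                counter++; // not atomic: read, add, write
            }
        }

        public static void main(String[] args) {
            // Two async tasks doing read-modify-write on the same field.
            CompletableFuture<Void> a = CompletableFuture.runAsync(RaceDemo::bump);
            CompletableFuture<Void> b = CompletableFuture.runAsync(RaceDemo::bump);
            CompletableFuture.allOf(a, b).join();

            // Often prints less than 2_000_000 when running freely; stopping
            // either task at a breakpoint changes the interleaving and the
            // lost updates quietly vanish.
            System.out.println(counter);
        }
    }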
I didn't really get into debuggers until (1) I was firmly in Windows, where you expect the GUI to work and the CLI to be busted, and (2) I'd been burned too many times by adding debugging printfs() that got checked into version control and caused trouble.
Since then I've had some adventures with CLI debuggers, such as using gdb to debug another gdb, using both jdb and gdb on the same process at the same time to debug a Java/C++ system, automating gdb, etc. But the thing, as you say, is that there is usually some investment required to get the debugger working for a particular system.
With a good IDE I think JUnit + debugging gives an experience in Java similar to using the REPL in a language like Python, in that you can write some code that is experimental and experiment with it, but in this case the code doesn't just scroll out of the terminal but ultimately gets checked in as a unit test.
Wtaf?
I only used debugger recently in C# and C, when I was learning and practicing them.
My workflow is usually:
1. insert a breakpoint on some code that I’m trying to understand
2. attach the debugger and run any tests that I expect to exercise that code
3. walk up and down the call stack, stepping occasionally, reading the code and inspecting the local variables at each level to understand how the hell this thing works and why it’s gone horribly wrong this time.
4. use my new understanding to set new, more relevant breakpoints; repeat 2-4.
Sometimes I fiddle with local variables to force different states and see what happens, but I consider this advanced usage, and anyway it often doesn’t work too well on my current codebase.
I've often felt that I should, but never enough to actually learn how
If there isn't one, I'd rather use a language with a debugger and write a thousand lines of code than 100 lines of code in a language I'm going to have to black box.
I've seen your work in Hotwire in my role as a Staff Ruby on Rails Engineer. It's the coolest thing to see you pop up in Hacker News every now and then and also see you talking with the Hotwire devs in GitHub.
Thanks for being a light in the programming community. You're greatly respected and appreciated.
https://htmx.org/essays/htmx-sucks/
get the mug!
https://swag.htmx.org/products/htmx-sucks-mug
https://htmx.org/essays/when-to-use-hypermedia/
https://htmx.org/essays/#on-the-other-hand
It's very effective, simple and expressive to work this way, as long as you keep in mind that some client side rendering is fine.
There are a few bits I don't like about it, like defaulting to swap innerHTML instead of outerHTML, not swapping HTML when the status code isn't 200-299 by default and it has some features that I avoid, like inline JSON on buttons instead of just using forms.
Other than that, it's great. I can also recommend reading the book https://hypermedia.systems/.
grug wonder why big brain take hardest problem, factoring system correctly, and introduce network call too
My grug brain conclusion: Grug see good microservice in many valley. Grug see grug tribe carry good microservice home and roast on spit. Grug taste good microservice, many time. Shaman tell of good monolith in vision. Grug also dream of good monolith. Maybe grug taste good monolith after die. Grug go hunt good microservice now.
But you are right that the remote procedure call is a fraught concept for more reasons than one. On one hand there is the fundamental difference between a local procedure call that takes a few ns and a remote call which might take 1,000,000 times longer. There's also the fact that most RPC mechanisms that call themselves RPC mechanisms are terribly complicated, like DCOM or the old Sun RPC. In some sense RPC became mainstream once people started pretending it was REST. People say it is not RPC but often you have a function in your front end Javascript like fetch_data(75) and that becomes GET /data/75 and your back end JAXB handler looks something like the sketch below.
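A rough reconstruction of the kind of handler being described, assuming a JAX-RS-style endpoint ("JAXB" read loosely); DataResource and DataDto are invented names, not from the comment:

    import jakarta.ws.rs.GET;
    import jakarta.ws.rs.Path;
    import jakarta.ws.rs.PathParam;
    import jakarta.ws.rs.Produces;
    import jakarta.ws.rs.core.MediaType;

    // "REST" in name, but in practice a remote procedure call by id:
    // fetch_data(75) on the client becomes GET /data/75 on the server.
    @Path("/data")
    public class DataResource {

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public DataDto fetchData(@PathParam("id") long id) {
            // look the record up and return it; serialization handled by the framework
            return new DataDto(id, "example payload");
        }

        public record DataDto(long id, String value) {}
    }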
I think this is correct as an explanation for the phenomenon, but it's not just a false perception on their part: for a lot of organizations it is actually true that the only way to preserve boundaries between systems over the course of years is to stick the network in between. Without a network layer enforcing module boundaries code does, in fact, tend to morph into a big ball of mud.
I blame a few things for this:
1. Developers almost universally lack discipline.
2. Most programming languages are not designed to sufficiently account for #1.
It's not a coincidence that microservices became popular shortly after Node.js and Python became the dominant web backend languages. A strong static type system is generally necessary (but not sufficient) to create clear boundaries between modules, and both Python and JavaScript have historically been even worse than usual for dynamic languages when it comes to having a strong modularity story.
And while Python and JS have it worse than most, even most of our popular static languages are pretty lousy at giving developers the tools needed to clearly delineate module boundaries. Rust has a pretty decent starting point but it too could stand to be improved.
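For what "compiler-enforced module boundaries" can look like, a minimal sketch using Java's module system (JPMS); the module and package names are invented:

    // module-info.java -- illustrative names only.
    // Only the api package is exported; other modules cannot import the
    // internal implementation, so the boundary does not depend purely on
    // team discipline.
    module com.example.billing {
        exports com.example.billing.api;
        // com.example.billing.internal is deliberately not exported
    }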
Many companies don't seem to do a good job coordinating between teams. Different teams have different incentives and priorities. If group A needs fixes/work from group B and B has been given some other priority, group A is stuck.
By putting a network between modules different groups can limit blast damage from other teams' modules and more clearly show ownership when things go wrong. If group A's project fails because of B's module it still looks like A's code has the problem.
Upper management rarely cares about nuance. They want to assign blame, especially if it's in another team or department. So teams under them always want clear boundaries of responsibility so they don't get thrown under the bus.
The root cause of a lot of software problems is the organization that produces it more than any individual or even team working on it.
Now I understand that “architecture” and “patterns” is a jobs program for useless developers. It’s this, or they’d be on the streets holding a sign saying “will write JavaScript for a sandwich”.
On one app I got brought in late to, the architect had done some complicated mediator pattern for saving data with a microservice architecture. They'd also semi-implemented DDD.
It was a ten page form. Literally that was what it was supposed to replace. An existing paper, 10 page, form. One of those "domains" was a list of the 1,000 schools in the country. That needed to be updated once a year.
A government spent millions on this thing.
I could have done it on my todd in 3 months. It just needed to use simple forms, with some simple client side logic for hiding sections, and save the data with an ORM.
The funniest bit was when I said that it couldn't handle the load because the architecture had obvious bottlenecks. The load was known and fairly trivial (100k form submissions in one month).
The architect claimed that it wasn't possible as the architecture was all checked and approved by one of the big 5.
So I brought the test server down during the call by making 10 requests at once.
This is the real, actual conversation to be had about "AI taking jobs."
I've seen similar things a lot in the private sector.
There's just loads of people just flailing around doing stuff without really having any expertise other than some vague proxy of years of experience.
It's really not even exactly their fault (people have lives that don't revolve around messing about with software systems design, sure, and there's no good exposure to anything outside of these messes in their workplaces).
But, outside of major software firms (think banks, and other non-"tech" F500s; speaking from experience here) there's loads of people that are "Enterprise Architects" or something that basically spend 5 hours a day in meetings and write 11 lines of C# or something a quarter and then just adopt ideas they heard from someone else a few years back.
Software is really an utterly bizarre field where there's really nothing that even acts as valuable credentials or experience without complete understanding of what that "experience" is actually comprised of. I think about this a lot.
Back in the very early 2000s I got sent to "tune IIS performance" at a 100-developer ISV working on a huge government project.
They showed me that pressing the form submit button on just two PCs at once had "bad performance".
No, it didn't. One was fast[1], the other took 60 seconds almost exactly. "That's a timeout on a lock or something similar", I told them.
They then showed me their 16-socket database server that must have cost them millions and with a straight face asked me if I thought that they needed to upgrade it to get more capacity. Upgrade to what!? That was the biggest machine I have ever seen! I've never in the quarter century since then seen anything that size with my own two eyes. I don't believe bigger Wintel boxes have ever been made.
I then asked their database developers how they're doing transactions and whether they're using stored procedures or not.
One "senior" database developer asked me what a stored procedure is.
The other "senior" database developer asked me what a transaction is.
"Oh boy..."
[1] Well no, not really, it took about a second, which was long enough for a human button press to "overlap" the two transactions in time. That was a whole other horror story of ODBC connection pooling left off and one-second sleeps in loops to "fix" concurrency issues.
I'm curious what role you have where you're doing this repeatedly
The agencies are similarly uncoordinated and will pick up their metaphorical credit card and just throw it at random small dev teams, internally, external, or a mix.
Those people will happily take the credit! The money just… disappears. It’s like a magic trick, or one of those street urchins that rip you off when you’re on holiday in some backwards part of the world like Paris.
I get brought in as “the cloud consultant” for a week or two at the end to deploy the latest ball of mud with live wires sticking out of it to production.
This invariably becomes an argument because the ball of mud the street urchins have sold to the customer is not fit for… anything… certainly not for handling PII or money, but they spent the budget and the status reports were all green ticks for years.
Fundamentally, the issue is that they're "going into the cloud" with platform as a service, IaC, and everything, but at some level they don't fully grok what that means and the type of oversight required to make that work at a reasonable cost.
"But the nice sales person from Microsoft assured me the cloud is cheaper!"
In fact we're going through one of these SAP HANA migrations at present and it's very broken, because the prime contractor has delivered a big ball of mud with lots of internal microservices.
Yet, developers are always using patterns and are thinking about architecture.
Here you are doing so too, a pattern, "form submission" and an architecture, "request-response".
- Cannot be run without an orchestrator like K8S, which is a bear to install and maintain, which helps sell managed cloud.
- Uses more network bandwidth, which they bill for, and CPU, which they bill for.
- Makes it hard to share and maintain complex or large state within the application, encouraging the use of more managed database and event queue services as a substitute, which they bill for. (Example: a monolith can use a queue or a channel, while for microservices you’re going to want Kafka or some other beast; see the sketch after this list.)
- Can’t be run locally easily, meaning you need dev environments in cloud, which means more cloud costs. You might even need multiple dev and test environments. That’s even more cloud cost.
- Tends to become dependent on the peculiarities of a given cloud host, such as how they do networking, increasing cloud lock in.
Anyone else remember how cloud was pitched as saving money on IT? That was hilarious. Knew it was BS way back in the 2000s and that it would eventually end up making everything cost more.
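To make the queue point above concrete, a toy sketch of the in-process version (all names invented): inside one monolith "the queue" is just a data structure, while splitting the producer and consumer into separate services turns this line into a managed broker with its own bill.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class InProcessQueue {
        public static void main(String[] args) throws InterruptedException {
            BlockingQueue<String> events = new ArrayBlockingQueue<>(1024);

            Thread producer = new Thread(() -> {
                for (int i = 0; i < 5; i++) {
                    events.add("order-" + i); // no network, no serialization
                }
            });

            Thread consumer = new Thread(() -> {
                try {
                    for (int i = 0; i < 5; i++) {
                        System.out.println("handled " + events.take());
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });

            producer.start();
            consumer.start();
            producer.join();
            consumer.join();
        }
    }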
Don't forget various pipelines, IaC, pipelines for deploying IaC, test/dev/staging/whatever environments, organization permissions strategies etc etc...
When I worked at a large, uh, cloud company as a consultant, solutions were often tailored towards "best practices"--this meant, in reality, large complex serverless/containerized things with all sorts of integrations for monitoring, logging, NoSQL, queues etc, often for dinky little things that an RPI running RoR or NodeJS could serve without breaking a sweat.
With rare exceptions, we'd never be able to say, deploy a simple go server on a VM with server-side rendered templates behind a load balancer with some auto-scaling and a managed database. Far too pedestrian.
Sure, it's "best practices" for "high-availability" but was almost always overkill and a nightmare to troubleshoot.
It’s hard to hire people to do anything else. People don’t know how to admin machines, so forget bare metal, even though it can be thousands of times cheaper for some workloads (especially bandwidth).
You’re not exaggerating with a raspberry pi. Not at all.
I think about this line at least once a week
"Tyrannosaurs were built for speed. Their leg bones were hollow, like a bird’s. If he could break a femur …
The shovel connected, but not solidly. It hit without breaking anything. But, still, it got tangled up in those powerful legs. With enormous force, it was wrenched out of his hands. Leyster was sent tumbling on the ground.
Somebody was screaming. Dazed, Leyster raised himself up on his arms to see Patrick, hysterically slamming the juvenile, over and over, with the butt of the shotgun. He didn’t seem to be having much effect. Scarface was clumsily trying to struggle to its feet. It seemed not so much angry as bewildered by what was happening to it.
Then, out of nowhere, Tamara was standing in front of the monster. She looked like a warrior goddess, all rage and purpose. Her spear was raised up high above Scarface, gripped tightly in both hands. Her knuckles were white.
With all her strength, she drove the spear down through the center of the tyrannosaur’s face. It spasmed, and died. Suddenly everything was very still."
this grug keeps one on one invisible t-rex, grug cursed
There is of course a time and place for sophistication, pushing for higher levels of abstraction and so on. But this grug philosophy is saying that there isn't any inherent value in doing this sort of thing and I think that is very sound advice.
Also I noticed AI assistance is more effective with consistent, mundane and data driven code. YMMV
Novice dev writes simple code
Intermediate dev writes complex code
Expert dev writes simple code
Three examples:
DRY (Don't Repeat Yourself) sometimes leads to premature abstraction. We think, "hey, I bet this pattern will get used elsewhere, so we need to abstract out the common parts of the pattern and then..." And that's when the Complexity Demon enters.
We want as many bugs as possible caught at compile-time. But that means the compiler needs to know more and more about what we're actually trying to do, so we come up with increasingly complex types which tax your ability to understand.
To avoid boilerplate we create complex macros or entire DSLs to reduce typing. Unfortunately, the Law of Leaky Abstractions means that when we actually need to know the underlying implementation, our head explodes.
Our challenge is that each of these examples is sometimes a good idea. But not always. Being able to decide when to introduce complexity to simplify things is, IMHO, the mark of a good software engineer.
To modulate DRY, I try to emphasize the “rule of three”: up to three duplicates of some copy/paste code is fine, and after that we should think about abstracting.
Of course no rule of thumb applies in all cases, and the sense for that is hard to teach.
Student: I notice that you duplicated code here rather than creating an abstraction for both.
Master: That is correct.
Student: But what if you need to change the code in the future?
Master: Then I will change it in the future.
At that point the student became enlightened.
Usually I make web applications using Sinatra-like frameworks like Flask or JAXB where I write a function that answers URLs that match a pattern and a "screen" is one or more of those functions that work together and maybe some HTML templates that go with them. For instance there might be a URL for a web page that shows data about a user, and another URL that HTMX calls when you flip a <select> to change the status of that user.
Assuming the "application" has the stuff to configure the database connection and file locations and draw HTML headers and footers and such, there is otherwise little coupling between the screens so if you want to make a new screen you can cut and paste an old screen and modify it, or you can ask an LLM to make you a screen or endpoint and if it "vibe coded" you a bad screen you can just try again to make another screen. It can make sense to use inheritance or composition to make a screen that can be specialized, or to write screens that are standalone (other than fetching the db connection and such.)
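A rough sketch of that kind of self-contained "screen", here rendered as a JAX-RS resource returning HTML with a small htmx-called endpoint; UserScreen and the routes are invented, and the comment's own setup uses Flask or JAXB rather than exactly this.

    import jakarta.ws.rs.FormParam;
    import jakarta.ws.rs.GET;
    import jakarta.ws.rs.POST;
    import jakarta.ws.rs.Path;
    import jakarta.ws.rs.PathParam;
    import jakarta.ws.rs.Produces;
    import jakarta.ws.rs.core.MediaType;

    // One class owns both the page and the fragment htmx swaps in when the
    // <select> changes; little coupling to other screens beyond shared config.
    @Path("/users")
    @Produces(MediaType.TEXT_HTML)
    public class UserScreen {

        @GET
        @Path("/{id}")
        public String show(@PathParam("id") long id) {
            return """
                <h1>User %d</h1>
                <select name="status"
                        hx-post="/users/%d/status"
                        hx-target="#status-msg">
                  <option>active</option>
                  <option>suspended</option>
                </select>
                <span id="status-msg"></span>
                """.formatted(id, id);
        }

        @POST
        @Path("/{id}/status")
        public String changeStatus(@PathParam("id") long id,
                                   @FormParam("status") String status) {
            // update the database here, then return just the fragment to swap in
            return "<span id=\"status-msg\">saved: " + status + "</span>";
        }
    }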
The origin story was that I was working on a framework for making ML training sets called "Themis" that was using microservices, React, Docker and such. The real requirement was that we were (1) always adding new tasks, and (2) having to create simple but always optimized "screens" for those tasks because if you are making 20,000 judgements it is bad enough to click 20,000 times, if you have to click 4x for each one and it adds up to 80,000 you will probably give up. As it was written you had to write a bunch of API endpoints as part of a JAXB application and React components that were part of a monolithic React app and wait 20 minutes for typescript and Docker and javac to do their things and if you are lucky it boots up otherwise you have to start over.
I wrote up a criticism of Themis and designed "Nemesis" that was designed for rapid development of new tasks and it was a path not taken at the old job, but Nemesis and I have been chewing through millions of instances of tasks ever since.
I also recoiled at the complexity of React, Docker, etc. and went a different path: I basically moved all the code to the server and added a way to "project" the UI to the browser. From the program's perspective, you think you're just showing GUI controls on a local screen. There is no client/server split. Under the covers, the platform talks to some JavaScript on the browser to render the controls.
This works well for me since I grew up programming on Windows PCs, where you have full control over the machine. Check it out if you're interested: https://gridwhale.com.
I think pushing code to the server via HTMX and treating the browser like a dumb terminal has the same kind of advantage: you only have to worry about one system.
Fundamentally, IMHO, the client/server split is where all the complexity happens. If you're writing two programs, one on the client and one on the server, you're basically creating a distributed system, which we know is very hard.
https://craftinginterpreters.com/representing-code.html#the-...
I remember reading through it and not understanding why it had to be this complicated and then just used a tagged union instead.
Maybe I'm too stupid for OO. But I think that's kind of the point of the grug article as well. Why burden ourselves with indirection and complexity when there's a more straight forward way?
https://grugbrain.dev/#grug-on-parsing
but the visitor pattern is nearly always a bad idea IMO: you should just encode the operation in the tree if you control it or create a recursive function that manually dispatches on the argument type if you don't
https://prog2.de/book/sec-java-expr-problem.html - Not the writeup I was looking for but seems to cover it well.
> Why burden ourselves with indirection and complexity when there's a more straight forward way?
Because each way has its own tradeoffs that make it more or less difficult to use in particular circumstances.
https://homepages.inf.ed.ac.uk/wadler/papers/expression/expr... - Wadler's description of the expression problem.
However, this is just not something that I typically perceive as a problem. For example in the book that I mentioned above, I didn't feel the need to use it at all. I just added the fields or the functions that were required.
In the first link you provided, the OCaml code seems to use unions as well (I don't know the language). I assume OCaml checks for exhaustive matching, so it seems extremely straight forward to extend this code.
On the other hand I have absolutely no issues with a big switch case in a more simple language. I just had a look at the code I wrote quite a while ago and it looks fine.
In languages influenced by ML (like contemporary Java!) it is common in compiler work that you might have an AST or similar kind of structure, and you end up writing a lot of functions that use pattern matching over the node types to implement various "functions" such as rewriting the AST into bytecode, building a symbol table, or something. In some cases you could turn this inside out and put a bunch of methods on a bunch of classes that do various things for each kind of node, but if you use pattern matching you can neatly group together all the code that does the same thing to all the different objects rather than forcing that code to be spread out on a bunch of different objects.
Java doesn't support this though, I thought?
https://blog.scottlogic.com/2025/01/20/algebraic-data-types-...
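For concreteness, a minimal sketch of the grouped-by-operation style in modern Java (sealed interfaces plus record patterns, Java 21); Expr, Num, and Add are invented names, not from the thread:

    // Hypothetical AST: all the "evaluate" logic lives in one switch instead of
    // being spread across a visit method on every node class.
    sealed interface Expr permits Num, Add {}
    record Num(int value) implements Expr {}
    record Add(Expr left, Expr right) implements Expr {}

    public class AstDemo {
        static int eval(Expr e) {
            return switch (e) {
                case Num n             -> n.value();
                case Add(var l, var r) -> eval(l) + eval(r); // record pattern
                // no default needed: the sealed hierarchy makes this exhaustive
            };
        }

        public static void main(String[] args) {
            System.out.println(eval(new Add(new Num(2), new Num(3)))); // prints 5
        }
    }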
Given the syntax tree example from Wikipedia, I think I'd call it AstWalker, AstItem::dispatch(AstWalker) and AstWalker::process(AstItem) instead of Visitor, AstItem::accept(AstVisitor) and AstVisitor::visit(AstItem).
"The walker walks the AST, each items sends it to the next ones, and the walker processes them". That means something. "The visitor visits the AST items, which accept it" means basically nothing. It's more general, but also contains very little useful information. So the visitor might need different names in different situations. Fine. Just add a comment "visitor pattern" for recognizability.
I remember a situation where I needed to walk two object trees for a data comparison and import operation. I created an AbstractImporter that walked the two trees in lockstep in a guaranteed order and invoked virtual methods for each difference. It had a non-virtual doImport() for the ordered data walk, and doImport() called virtual methods like importUserAccount(), importUserAccountGroupMemberships() etc. There were two subclasses of AbstractImporter: ImportAnalyzer collected differences to display them, then there was a selection step implemented by a kind of list model + a bit of controller logic, then an ImportWorker to make the selected changes. All rather specific terminology and not exactly the visitor pattern.
"Bad"
lol
This doesn't mean OP is bad advice, just make a conscious decision about what to do with complexity and understand the implications.
The Grug Brained Developer (2022) - https://news.ycombinator.com/item?id=38076886 - Oct 2023 (192 comments)
The Grug Brained Developer - https://news.ycombinator.com/item?id=31840331 - June 2022 (374 comments)
Grug brained dev I am I guess.
many, many shiney rock lost to agile shaman!
grug make other work grugs read this after yellow circle arrive next
grug thank clever grug
https://docs.google.com/document/d/1emldq9MovfYshOSkM9rhRUcl...
not as good as the original, i know!
https://www.lulu.com/shop/carson-gross/the-grug-brained-deve...