At every company I’ve ever worked for, the bottleneck is not “how fast can we spit out more code?” It’s always: “how fast can the business actually decide what they want and create a good backlog?”
Maybe startup development will significantly accelerate with AI churning out all the boilerplate to get your app started.
But enterprise development, where the app is already there and you’re building new features on top of a labyrinthine foundation, is a different beast. The hard part is sitting through planning meetings or untangling weird system dependencies, not churning out net new code. My two cents anyway.
idopmstuff 29 days ago [-]
As a PM I have never not had a backlog of little stuff we'd love to do but can't justify prioritizing. I've also almost always had developers who want to make improvements to the codebase that don't get prioritized because we need new features.
The upside is that both of these things are the kind of tasks that are probably good to give to AI. I've always got little UI bugs that bother me every time I use our application but don't actually break anything and thus won't impact revenue and never get done.
I had a frontend engineer who, when I could find a way to give him time to do whatever he wanted, would constantly make little improvements that would incrementally speed up pageload.
Both of those cases feel like places where AI probably gets the job done.
eitally 29 days ago [-]
That sounds good, but if you have a PMO and an enterprise Change Control Board that controls your not-quite-CI/CD deployments, you may find yourself hamstrung. I've been in that position before, where there was simultaneously a bottleneck of clear requirements and also a bunch of stuff (tech debt, small features, bug fixes, UI tweaks) sitting and waiting on a branch ready to deploy when downtime was finally approved. Or, situations where enterprise policy requires human SQA signoff on everything going to prod. There are lots of places you can create inefficiencies in the system and lack of approved requirements is just one.
idopmstuff 29 days ago [-]
Thankfully my career has been at very early stage startups, so none of that applies!
wrl 29 days ago [-]
> developers who want to make improvements to the codebase that don't get prioritized
So, to clarify – developers want to make improvements to the codebase, and you want to give that work to AI? Have you never been in the shoes of making an improvement or a suggestion for a project that you want to work on, seeing it given to somebody else, and then being assigned just more slog that you don't want to do?
I mean, I'm no PM, but that certainly seems like a way to kill team morale, if nothing else.
> I had a frontend engineer who, when I could find a way to give him time to do whatever he wanted, would constantly make little improvements that would incrementally speed up pageload.
Blows my mind to think that those are the things you want to give to AI. I'd quit.
sublinear 29 days ago [-]
I completely agree. Those annoying UI bugs and the general need to refactor are often the same technical debt. If you want to make an already bad codebase even worse, giving those tasks to AI is probably the quickest and surest way.
The ability to untangle old bad code and make bigger broader plans for a codebase is precisely where you need human developers the most.
idopmstuff 29 days ago [-]
I'd give them to AI because they're generally just not getting done. I worked hard to get that frontend dev time to make those improvements, but there was no chance it was ever going to be enough. When you're talking about enterprise software, minor improvements to pageload speed do not move the needle on revenue. When you have a list of features that customers will actually pay for, those will get priority 100% of the time.
Everybody's job is to serve the company priorities. Engineers don't get to pick the tasks they want to do because they're getting paid to be there. I also have spent lots of time doing things I'd rather not do, because that's the nature of a job (plus a pile of stock options incentivizes me).
Better to have those tasks done by AI than not at all.
servercobra 29 days ago [-]
There are tons of small improvements I want to make to our codebase that would be great but take effort. Refactors are a great example. We hand those to Devin (or Cursor background agents, etc.), review, and we're all happier for it. Our PM uses it to fix those little UI annoyances all the time, like "update the text on this button". It's been wonderful.
idopmstuff 29 days ago [-]
Really says something about the HN crowd that you're getting downvoted for this.
herval 29 days ago [-]
I never worked at a place where not having a backlog was an issue. Quite the opposite, in fact: there are always infinite backlogs of stuff. Every single time I’ve seen organizations being slow to decide anything, it was due to the human tendency to stretch tasks to occupy as much time as possible. Planning meetings are “the work” for a legion of people (even though they also know they’re mostly pointless). Untangling dependencies is harder when it involves approvals from other humans (particularly fun when multiple people are “the tech lead”, all objectively wrong but unable to see how they’re simply getting in the way).
I don’t think LLMs are particularly smart, or that they’ll definitely replace humans at anything, or that they’ll lead to better work. But I can already tell that their inherent lack of ego DOES accelerate things at enterprises, for the simple reason that the self-imposed roadblocks above stop happening.
salt-thrower 29 days ago [-]
At my current workplace, we do have a roadmap for the business, but the actual backlog of tickets to implement work is all waiting on other siloed teams to make decisions that we are downstream of. This ranges from our infrastructure model to simple things like “which CSS components are we allowed to use.”
We are also explicitly NOT allowed to make any code changes that aren’t part of a story that our product owner has approved and prioritized.
The result is that we scrape together some stories to work on every sprint, but if we finish them early, we quickly run into red tape and circular conversations with other “decision makers” who need to tell us what we’re allowed to do before we actually do anything.
It’s fairly maddening. The whole org is hamstrung by a few workaholic individuals who control decision making for several teams and are chronically unavailable as a result.
I’ve seen this sort of thing happen at other big enterprises too but my current situation is perhaps an extreme example of dysfunction. Point being, when an org gets tangled up like this, LLMs aren’t gonna save it :)
herval 29 days ago [-]
The moment those people start being removed, and the little work they do automated, it’ll have a dramatic downstream effect.
I’ve already witnessed a certain big tech company start to move much faster by removing TPMs & EMs across the board, even without LLMs to “replace” them. With LLMs, you need even fewer layers. Then eventually fewer middle-of-business decision makers. In your example, it’s entirely possible that the function of making those components could be entirely subsumed by a single AI bot. That’s starting to happen a lot in the devops space already.
All that said, I doubt your business would benefit from moving faster anyway; most businesses don’t actually need to move faster. I highly recommend the book “Bullshit Jobs” on this matter. Businesses will just need fewer and fewer people.
Aeolun 29 days ago [-]
All of those things will be easier with fewer people involved though?
DoesntMatter22 29 days ago [-]
Yup, I agree. The fundamental limiter is humans deciding. But it will be trivial to clone apps where things were already decided.
Though AI will probably just proactively add features and open PRs, and people can choose which to merge.
ethbr1 29 days ago [-]
There was a submission a few months ago that boiled down to 'AI will force us to reevaluate our human in the loop decision points.'
Which I expect will be the gist of management consulting reports for the next decade.
If human decision-makers become the bottleneck... eventually that will be reengineered.
I'm fascinated to imagine what change control will need to look like in a majority-AI scenario. Expect there will be a lot more focus on TDD.
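To make that concrete: here's a minimal sketch of what TDD-as-change-control could look like, where a human authors the failing test and an AI-generated change only merges once it passes. Everything in it (the module, the function, the CI gate) is hypothetical, not any real pipeline:

    # test_rate_limit.py -- human-authored spec, committed before any AI change.
    # A (hypothetical) CI gate refuses AI-opened PRs until these tests pass.
    import pytest
    from ratelimits import parse_rate_limit  # hypothetical module the AI must implement

    def test_parses_simple_limit():
        # "100/minute" should mean 100 requests per 60 seconds
        assert parse_rate_limit("100/minute") == (100, 60)

    def test_rejects_garbage():
        with pytest.raises(ValueError):
            parse_rate_limit("lots, whenever")

In that world the human decision point shifts from reviewing every diff to authoring the executable spec.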
DoesntMatter22 27 days ago [-]
Eerily similar to the plot of terminator
kaonwarb 29 days ago [-]
I see several folks commenting on this from the perspective of software engineering. Keep in mind that those are a small minority of Amazon's enormous workforce: an estimate a few years back [0] was 3.5%.
[0] https://newsletter.pragmaticengineer.com/p/amazon
jonny_eh 29 days ago [-]
This is Hacker News sir
pryelluw 29 days ago [-]
“Jassy wrote that employees should learn how to use AI tools and experiment and figure out ‘how to get more done with scrappier teams.’”
Isn’t this their general approach since forever?
happytoexplain 29 days ago [-]
Note that "scrappier" here doesn't just mean fewer staff, but also less experienced and lower-paid staff.
nitwit005 29 days ago [-]
That's something odd about the recent AI hype: companies that were already using AI are making statements like this.
Somehow they want to act like they are making a shift, rather than say they were ahead of the trend.
chneu 29 days ago [-]
That's what businesses do. They want to lower payroll, always, and will use every new innovation to do it.
The wording changes, the intention doesn't.
If they could pay you nothing they would.
nitwit005 29 days ago [-]
This doesn't appear to relate to what I said.
goatlover 29 days ago [-]
Sounds like a flaw in the dominant economic model.
ethbr1 29 days ago [-]
I mean, labor is a significant component of prices for goods.
But I expect the increasing income stratification of the 2010s onward is a harbinger that we're running out of high-paying jobs for the number of people who are qualified for them.
And the window is closing for countries to agree to something like a structural tax on AI with benefits going to society to address the ills.
Absent that: further stratification, more employee-less businesses, and not a great future
rsynnott 29 days ago [-]
The markets seem to like it, so if you go "we're going AI-first!" every six months, you'll get a little stock price boost. Actually _doing_ anything, naturally, is entirely optional.
Expect this to repeat until the markets choose a new favourite thing (I'm betting on "quantum"; it's getting a lot of press lately and is nicely vague.)
tart-lemonade 28 days ago [-]
"We're going all in on AI" is the modern equivalent to adding .com to your company name to juice the stock price. It's utterly meaningless, yet it works shockingly well.
827a 29 days ago [-]
I find it extremely strange that a company leader thought it would be OK to just say "our financial situation is in a place where we cannot adequately staff our teams". The market clearly thought it was strange as well, given their stock performance today.
Really bad look and poor leadership from Jassy. There's a good way to frame adoption of AI, but this is not it.
usefulcat 29 days ago [-]
> The market clearly thought it was strange as well, given their stock performance today.
For 6/17, the S&P 500 was down 0.84%, QQQ (Nasdaq stocks) was down 0.98% and AMZN was down 0.59%.
AMZN slightly outperformed the market today.
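For what it's worth, you can also back AMZN's own contribution out of the index move with a quick sketch (taking the ~5% weight figure cited downthread as an assumption):

    # Back-of-envelope: how did the S&P 500 *excluding* AMZN do on 6/17?
    w = 0.05             # assumed AMZN weight in the index
    index_ret = -0.84    # S&P 500 return, in percent
    amzn_ret = -0.59     # AMZN return, in percent

    ex_amzn_ret = (index_ret - w * amzn_ret) / (1 - w)
    print(round(ex_amzn_ret, 2))  # -0.85: the rest of the index fell ~0.85%

So even after stripping out its own weight, AMZN fell noticeably less than the rest of the market that day.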
827a 29 days ago [-]
With AMZN commanding 5% of the S&P's entire market capitalization, the market is not some independent entity that AMZN can be compared to; the S&P follows what AMZN does.
usefulcat 29 days ago [-]
I might be misunderstanding, but it sounds like you're claiming that the 95% of the S&P that is not Amazon "follows what AMZN does"? If so, I'd like to hear more about exactly how that works because it sounds very unintuitive, to say the least.
In any case, my point was that objectively, AMZN suffered less today than many other stocks, including many other large cap tech (QQQ) and non-tech (S&P) stocks. Considering those facts, it seems like a stretch to claim "the market clearly thought it was strange as well".
827a 28 days ago [-]
You have what I would describe as a Ben Graham '80s-era view of how the stock market works. The stock market is fundamentally a very different beast post-2008. The top ten companies in the S&P 500 make up 36% of the entire index's market capitalization. In 1980, this number was closer to 15%.
One cannot draw any conclusions about how an individual stock in the S&P 10 performs relative to the overall market, because of how correlated these companies are and how much their combined weight contributes to the overall market. Every company in the S&P 10 is a tech company, except Berkshire. They trade together, and how they trade impacts the entire S&P 500.
When Jassy says something, it impacts Google's stock. When it comes out that OpenAI might have to sue Microsoft, it impacts Amazon's stock. Why this happens only makes sense to Wall Street's HFT systems which, quite honestly, are likely closer at this point to ASI than OpenAI, albeit totally unintelligible in their motives and reasoning.
Amazon did not outperform the market. The market is Amazon. The S&P 10 is not 10 individual companies; it's one company.
dale_huevo 29 days ago [-]
Next they'll take away the door desks.
droopyEyelids 29 days ago [-]
Small and scrappy teams work when the team has less than 8 hours of corporate busywork to do a day (Jira, compliance training, triaging 10k alerts from the new scanning software, etc)
AtlasBarfed 29 days ago [-]
[flagged]
pryelluw 29 days ago [-]
I reason they’ve been past the rectum for a long time and are now well into the esophagus. Maybe the goal is a human centipede kind of setup? I mean, one human made up of multiple humans has to be the ultimate productivity machine. Plus they only have to pay one salary …
theslurmmustflo 29 days ago [-]
How many h1bs will they ask for during that time?
sterlind 29 days ago [-]
We really need immigration reform. Companies prefer H1B workers because they can treat them like indentured servants: they're bound to the company that sponsored their visa, and have only 60 days to find a new job if fired or they'll be deported. Companies can also reset the green card process in retaliation if they do leave.
I'm radically pro-immigrant. I want the smartest people from around the world to come work here. I want to unshackle them from their corporate sponsors. the current system is unfair to immigrants (who are bound like serfs to their workplace) and to citizens (who lose jobs because corporations prefer serfs.)
rurp 29 days ago [-]
I'm really surprised there isn't more pushback to the program, since it has aspects that piss off both political sides. Maybe it's just too wonky for mainstream political coverage. A system of indentured servants really is the best description; the potential for abuse is both obvious and widespread. And on the other side, of course, they take jobs from Americans in many cases. Big tech companies love hiring people they can abuse, especially if they can also pay them less than local hires.
DragonStrength 29 days ago [-]
My entire old team at Amazon has been reduced from 8 people, of which 5 were citizens (and one got his green card while I was there), to 2 immigrants who arrived right before the pandemic, both from different countries at war. I only know this because after the last round of layoffs one of them reached out to me asking if I could get him out of that hell. Seems pretty straightforward what has happened here.
amazingamazing 29 days ago [-]
[flagged]
ofjcihen 29 days ago [-]
TIL immigrant is a race
amazingamazing 29 days ago [-]
When 75%+ of them are non-white, yup. Changed it to "xenophobic" tho, to help you.
ofjcihen 29 days ago [-]
I don’t think all immigrants are H1B holders, are they?
amazingamazing 29 days ago [-]
Are h1bs immigrants?
ofjcihen 29 days ago [-]
Sure, but that’s not what being xenophobic means.
Do you think that maybe it’s possible the OP has a problem with the program and that crying racism whenever someone brings it up might actually be hurting your argument?
umbra07 29 days ago [-]
Is this satire?
ldjkfkdsjnv 29 days ago [-]
Inside view:
Amazon has a document-writing culture; all of those documents will be written by AI. People have built careers on writing documents. Same with operations: it's all about audit logs. Internally, there are MCPs that have already automated TPM/PM/oncall/maintenance coding. Some orgs in AWS are 90% foreign, and there is fear about losing visa status and going back. The automation is just beginning. Sonnet 4 felt like the first time MCPs could actually be used to automate work.
A region expansion scoping project in AWS that required detailed design and inspection of tens of codebases was done in a day; it would usually require two or three weeks of design work.
The automation is real, and the higher-ups are directly monitoring token usage in their orgs and pushing senior engineers to increase Q/token usage metrics among low-level engineers. Most orgs have a no-backfill policy for engineers leaving; they are supplementing staffing needs with Indian contractors, the expectation being that fewer engineers will be needed in a year's time.
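For anyone who hasn't touched MCP (Model Context Protocol): it's the plumbing that lets a model call internal tools on its own. Here's a minimal sketch using the public Python SDK; the triage tool itself is a made-up toy, not Amazon's internal tooling:

    # pip install mcp
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("ops-helper")  # hypothetical server name

    @mcp.tool()
    def summarize_oncall_ticket(ticket_text: str) -> str:
        """Return a one-line triage summary of an oncall ticket (toy logic)."""
        lines = ticket_text.strip().splitlines()
        return "TRIAGE: " + (lines[0][:120] if lines else "(empty ticket)")

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default; an MCP client can now call the tool

Once a tool like that is registered, the model decides when to invoke it, which is the shape of the automation being described.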
altairprime 29 days ago [-]
Replacing the word “says” in the headline with “hopes” gives a more precise statement about the mindset driving the creative theft behind AI; only the hope of deprecating all skilled workers in America with one technological advancement, without loss of gross revenue, could justify as severe a gamble as the one corporations are taking on it.
SimianSci 29 days ago [-]
At this point I'm convinced that these sorts of headlines are being intentionally put out there as a form of marketing via fear.
What better way to convince people to learn/use your AI offerings than to have those people think their livelihoods are in danger because of them?
AI has provided a lot of unique value, but despite the countless headlines stoking fear of mass job loss, there still remains little substance to these claims of being able to automate anything but the most menial of jobs.
Until we can directly point the finger to AI as the cause of job loss numbers rising, and not other unrelated economic factors, this all just smells of fear mongering with a profit incentive.
SlowTao 29 days ago [-]
Yep, "out tech is so amazing it will take jobs! Please invest now!"
AtlasBarfed 28 days ago [-]
CEOs are always personally marketing their "leadership" to maintain position with the board and stockholders, for future jobs, and to push around peers in the sociopath executive class.
These people universally hate labor.
The entire tech industry went on a firing binge when Musk bought Twitter and fired everyone, and while Nazi salutes have done a bit to blunt his golden-boy status in the exec ranks, not THAT much...
Now every CEO is trying to elbow their way to be the AI golden boy. It's worth tens of billions, as Musk has shown.
smrtinsert 29 days ago [-]
If that's all he sees, it's a hilariously myopic take on the impact of AI.
AI is for coding velocity like electricity is for better room lighting.
We haven't seen the nature of work after AI yet; we're still in a nascent phase. Consider every single white-collar role, process, and workflow in your organization up for extreme disruption during this transition period, and it will take at least a decade to even begin to sort out.
poslathian 29 days ago [-]
I like this metaphor about electric lighting. However, having lived in two ~1850 houses, I can say they sure look and function a lot like they did before electricity, despite nearly every element having been “disrupted” by electricity and all the rest.
tqi 29 days ago [-]
As long as managers want to be senior managers, senior managers want to be directors, and directors want to be vice presidents, this will not happen.
lpv4n 29 days ago [-]
Is technological progress being weaponized against the working class a thing of our time, or was the world always like that? We're living in a period of almost zero job stability, and on top of that we get bombarded every day with news that things are going to get even worse.
loosetypes 29 days ago [-]
He also released an internal memo on atoz today with a grammatical mistake in the first sentence.
1970-01-01 29 days ago [-]
So what happens during an S3 outage? Who gets blamed when things are offline and the AI isn't sure what to do? Even better, what happens when the AI is running in the same region that is down?
mym1990 29 days ago [-]
Surely Amazon would implement redundancy where an AI from a different region to the one that is down would take over. Not sure I follow why "blaming" someone is necessary to move forward with recovering from a failure. In most workplaces, the post mortem is around figuring out what the issue was and remediating it/making sure it doesn't occur in the future, not pointing fingers.
1970-01-01 29 days ago [-]
>Not sure I follow why "blaming" someone is necessary to move forward with recovering from a failure.
I wasn't clear, but getting that AI to admit it has made an error, and then getting it to actually correct the error, is like trying to put a round peg into a square hole. It will take the blame and continue as if nothing needs to change, no matter what prompts you send it.
mym1990 29 days ago [-]
I think this is assuming that the model doesn't get better through additional training or feedback, but it seems to me that reward/punishment is actually what drives better prediction over the medium/long run. Right now we are definitely in a place where the models are overconfident, but I think with time the self-correction will become very good.
ldjkfkdsjnv 29 days ago [-]
They've been automating S3 operations for 15 years. The critical AWS services are extremely mature; people have no clue.
tartoran 29 days ago [-]
And shrink the product as well, even though they will try to make it look like it's growing. AI will not only eat into the workforce but also into the profits these companies are making.
podgietaru 29 days ago [-]
I don’t doubt that Amazon will do this. Not because of the efficacy of the tools, but because of the direction Amazon has been going in for years.
But, you know, a company that has invested billions in AI selling the idea that AI will be replacing labour is not surprising.
user4673568345 29 days ago [-]
The ole AI perpetual labor machine is a big fat lie.
kjsingh 28 days ago [-]
they gotta make a stock price boosting statement every month
exabrial 29 days ago [-]
No it won't lol. It'll just make:
1. The existing codebase worse
2. The existing employees work more
3. The salaries stay flat
AndrewKemendo 29 days ago [-]
You’re making the assumption that these are relevant to the overall goal of Amazon
I’d argue that #1 is irrelevant provided the system continues to extract profit at the same or greater margin.
Amazon lives and dies by not caring about #2, so that’s constant.
#3 is desirable from Jassy’s and the board’s perspective.
Seems like exactly what I’d expect from Amazon
exabrial 29 days ago [-]
> You’re making the assumption that these are relevant to the overall goal of Amazon
Not at all! No assumptions made: their website, technology stack, and SaaS platform are all garbage... yet they persist in making obscene amounts of money!
msgodel 29 days ago [-]
It's not like Amazon has been optimizing for high quality engineers to begin with.
dymk 29 days ago [-]
I’m of the radical opinion that engineers at Amazon are pretty good, but they’re in a pressure cooker that incentivizes bad work. At minimum, it rarely gives space for people to excel.
herval 29 days ago [-]
Given the business is still growing, even if that’s all they achieve, it’s already a massive win, no?
exabrial 29 days ago [-]
Precisely. I expect their profits to expand; I was strictly speaking of the claim in the title. No doubt they will continue to expand on their current strategies.