Recent Bookmarks and Annotations
-
Fear of Missing Out is Not a Good Reason to Implement AI on Mar 26, 26
-
With nearly every enterprise feeling the pressure to use AI immediately, some are not applying AI to the right use cases.
-
For many enterprises, the rush to use AI has to do with the fear of missing out and not wanting to be behind on innovation.
-
While primarily a hardware vendor, HP also offers Workforce Experience Platform, a SaaS program that helps IT teams manage employee digital experiences across hardware devices such as PCs, Macs, printers and virtual machines.
-
What Masud said enterprises should aim to use AI technology for: as a problem solver.
-
"But to come up and manufacture a problem so that AI can solve it, that's where the FOMO is kicking in, and it's been happening for a while."
-
most of the time, humans must be in the loop to keep the AI system accountable and help the organization.
6 more annotations...
-
AI: enterprises will foot the bill - Le Monde Informatique on Mar 25, 26
-
The leading providers of AI and of infrastructure for these environments have invested more than $1,000 billion (€860 billion) in the technology
-
These companies are looking to recoup that spending, and it is enterprise customers who are likely to bear most of the cost.
-
Usually, the cost of new technologies for enterprises tends to fall over time, but some observers believe the recent AI gold rush could delay those price drops by several years.
-
Pricing power is shifting from the customer to the vendor, and the ability to negotiate from a position of strength is shrinking fast, he adds. Enterprises are not passive beneficiaries of AI progress. They are the revenue source."
-
IT leaders have complained for years about the costs of single-vendor cloud deployments. "As AI embeds itself ever more deeply into operations and evolves faster than cloud ever did, it is crucial to prioritize multiple options now, before dependence on a single vendor becomes a serious drag,"
-
Back in 2018, Nvidia CEO Jensen Huang was already forecasting that AI infrastructure investment would reach $4 trillion (€3.5 trillion) by the end of the 2020s.
-
More recently, he suggested that organizations favor experimentation rather than judging AI projects on short-term ROI, or the lack of it. Advice that earned him some sarcasm.
-
C'est un signal d'alarme, car cela indique généralement des objectifs de vente et de profit trop ambitieux, analyse-t-il. Ces objectifs agressifs incitent les équipes commerciales à privilégier une vision à court terme pour préserver leur emploi, même si cela signifie que les clients perdent le leur en raison de promesses non tenues.
-
C'est la célèbre analogie avec le trafiquant de drogue. Ces entreprises essaient de rendre les individus et les organisations accros à une drogue appelée IA, ose-t-il. Si l'IA est incontestablement puissante, dès que les coûts réels entrent en jeu, les entreprises doivent évaluer sa valeur au regard de ces coûts, qui seront évidemment bien moins avantageux qu'aujourd'hui.
-
competition among the major vendors remains strong, even if consolidation is conceivable over the long term
-
"What's more, other forms of competition are emerging in response to enterprise concerns about costs, data sovereignty and other issues,
-
"which probably means lower profit margins, but also new sources of creativity,
-
Some companies in the market will lose value, while others will emerge with new opportunities."
-
Rimini Street's CTO is more worried about how enterprises use these tools: "It is essential to ensure that abstraction and governance are built in from the start to avoid excessive dependence."
-
In his view, CIOs should focus more on the added value delivered by the AI tools they deploy than on their price
-
Si l'IA améliore sensiblement la productivité, la prise de décision et les résultats, le modèle économique est avantageux, affirme Eric Helmer. Dans le cas contraire, aucune baisse de prix future ne justifiera de telles dépenses. »
14 more annotations...
-
The AI Industry Is Lying To You on Mar 25, 26
-
new data center capacity additions (as in additions to the pipeline, not brought online) halved in the fourth quarter of 2025
-
The decline underscores the difficulties of the current development environment and signals a resulting focus on existing pipeline projects.
-
as the utility provider is only responsible for getting power to the facility, not generating the power itself
-
This means that fifty eight god damn percent of data centers need to work out their own power somehow.
-
Only 33% of announced US data centers are actually being built, with the rest in vague levels of “planning.”
-
Even if they were, there’s not enough power for them to turn on.
-
which would require at least 1.95GW of power just to run, when you include all the associated gear and the challenges of physically getting power.
-
None of this data talks about data centers actually coming online.
-
The problem is, this number doesn’t actually express newly-turned-on data centers.
-
these are facilities that have been “delivered” or have a “committed tenant.”
-
A “committed tenant” could mean anything from “we’ve signed a contract and we’re raising funds”
-
That’s great! Except Avison Young has chosen to define absorption in an entirely different way — that a data center (in whatever state of construction it’s in) has been leased, or “delivered,” which means “a fully ready-to-go data center” or “an empty warehouse with power in it.”
-
I’m comfortable estimating that North American data center absorption — as the IT load of data centers actually turned on and in operation — was at around 3GW for 2025, which would work out to about 3.9GW of total power.
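The ratio implied by the excerpt (3GW of IT load working out to about 3.9GW of total power) is roughly a 1.3x facility overhead. A minimal back-of-envelope sketch, assuming that multiplier is meant as a PUE-style overhead factor rather than a figure stated in the article:

```python
# Back-of-envelope check of the quoted IT-load-to-total-power estimate.
# The 1.3x overhead factor is an assumption inferred from the quoted figures
# (3 GW IT load -> ~3.9 GW total); the article does not state it explicitly.
it_load_gw = 3.0        # estimated IT load actually energized in 2025
overhead_factor = 1.3   # assumed overhead for cooling, power distribution, losses
total_power_gw = it_load_gw * overhead_factor
print(f"Total facility power: {total_power_gw:.1f} GW")  # -> 3.9 GW
```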
-
With the acceleration of NVIDIA’s GPU sales, it now takes about 6 months to install and operationalize a single quarter’s worth of sales.
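Read literally, that means installation throughput is about half a quarter's worth of shipments per quarter. A toy model, using illustrative shipment and growth numbers that are not from the article, shows why the uninstalled backlog keeps growing under that assumption:

```python
# Toy backlog model: each quarter's GPU shipments take ~2 quarters to install,
# while shipments keep growing. Starting volume and growth rate are invented
# for illustration; only the "6 months per quarter of sales" ratio is from the excerpt.
shipments = 100.0   # arbitrary units shipped in the first quarter
growth = 1.15       # assumed 15% quarter-over-quarter growth in shipments
backlog = 0.0       # GPUs shipped but not yet installed

for quarter in range(1, 9):
    backlog += shipments
    install_capacity = shipments / 2          # one quarter of sales takes ~6 months to install
    installed = min(backlog, install_capacity)
    backlog -= installed
    print(f"Q{quarter}: shipped {shipments:6.1f}, installed {installed:6.1f}, backlog {backlog:6.1f}")
    shipments *= growth
```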
-
I should also add that it’s commonplace for hyperscalers to buy the GPUs for their colocation partners to install,
-
It’s becoming very obvious that data center construction is dramatically slower than NVIDIA’s GPU sales, which continue to accelerate dramatically every single quarter.
-
By the time they actually get all the Blackwells in, Vera Rubin will be two years old! And by the time we install those Vera Rubins, some other new GPU will be beating it!
-
Camus explains that the biggest problem is a lack of transmission capacity (the amount of power that can be moved) and power generation (creating the power itself):
-
The biggest driver of delay is simple: our power system doesn’t have enough extra transmission capacity and generation to serve dozens of gigawatts of new, high-utilization demand 100% of the time.
-
it takes way longer to build a data center than anybody is letting on, as evidenced by the fact that we only added 3GW or so of actual capacity in America in 2025.
-
Abilene would now be 8 buildings, with the first two buildings being energized by the “first half of 2025,” and that the rest would be “energized by mid-2026.”
-
At its current rate of construction, Stargate Abilene will be fully built sometime in late 2027. Oracle’s Port Washington Data Center, as of March 6, 2026, consisted of a single steel beam.
-
And, despite Microsoft trying to mislead everybody that its Wisconsin data center had “arrived” and “been built,” looking even an inch deeper suggests very little has actually come online.
-
Microsoft, like everybody else, is building data centers at a glacial pace, because construction takes forever, even if you have the power, which nobody does!
-
Everybody keeps yammering on about “what if data centers don’t have power” when they should be thinking about whether data centers are actually getting built.
-
These deals are announced, usually by overly-eager reporters who don’t bother to check whether the previous data centers ever got built, as massive “multi-gigawatt deals,” and then nobody follows up to check whether anything actually happened.
-
I think a lot of these data center deals are trash, will never get built, and thus will never get paid.
-
These things aren’t getting built, or if they’re getting built, it’s taking way, way longer than expected, which means that interest on that debt is piling up.
-
Here’s the reality: data centers take forever. Every hyperscaler and neocloud talking about “contracted compute” or “planned capacity” may as well be telling you about their planned dinners with The Grinch and Godot.
-
Hundreds of billions of dollars have been sunk into buying GPUs, in some cases years in advance, to put into data centers that are being built at a rate that means that NVIDIA’s 2025 and 2026 revenues will take until 2028 to 2029 to actually operationalize, and that’s making the big assumption that any of it actually gets built.
-
Simple questions like “where are the GPUs going?” and “how many actual GPUs have been installed?” are left unanswered as article after article gets written about massive, multi-billion dollar compute deals for data centers that won’t be built before, at this rate, 2030.
-
Then there’s the very, very obvious scandal that NVIDIA, the largest company on the stock market, is making hundreds of billions of dollars of revenue on chips that aren’t being installed
-
As a neocloud, Megaspeed rents out AI compute capacity like CoreWeave, and while NVIDIA (and Megaspeed) both deny any of their GPUs are going to China, Megaspeed, to quote Bloomberg, has “something of a Chinese corporate twin”:
-
several of the “computing clusters under construction” actually being in China.
-
While all of this is extremely weird and suspicious, I must be clear there is no declarative answer as to what’s going on, other than that NVIDIA GPUs are absolutely making it to China, somehow.
-
Huang did, however, say back in May 2025 that there was “no evidence of any AI chip diversion,” and that the countries in question “monitor themselves very carefully.”
-
In the event that NVIDIA had knowledge — which I am not saying it did, of course — this is a huge scandal that, for the most part, nobody has bothered to keep an eye on
-
demanding the Department of Commerce take “all necessary and appropriate actions” to stop the flow of NVIDIA chips to China,
-
hyperscalers are forcing everybody in their companies to use AI tools as much as possible, tying compensation and performance to token burn, and actively encouraging non-technical people to vibe-code features that actually reach production.
-
this means that everybody is being expected to dick around with AI tools all day, with the expectation that you burn massive amounts of tokens and, in the case of designers working in some companies, actively code features without ever knowing a line of code.
-
While non-technical workers aren’t necessarily allowed to ship directly to production, their horrifying pseudo-software, coded without any real understanding of anything, is expected to be “fixed” by actual software engineers who are also expected to do their jobs.
-
According to internal Meta communications and an incident report seen by The Information, a major security alert occurred last week after a Meta software engineer used an in-house agent tool, similar to OpenClaw, to analyze a technical question that another Meta employee had posted on an internal discussion forum.
-
On March 2, customers across Amazon marketplaces saw incorrect delivery times when adding items to their carts. The incident led to nearly 120,000 lost orders and roughly 1.6 million website errors.
-
I believe that these events are just the beginning of the true consequences of AI coding tools: the slow destruction of the tech industry’s software stack.
-
The problem is that while LLMs can write “all” code, that doesn’t mean the code is good, or that somebody can read the code and understand its intention
-
LLM-based code is often verbose, and rarely aligns with in-house coding guidelines and standards
-
Why is anybody at Meta using an OpenClaw? What is the actual thing that OpenClaw does, other than burn an absolute fuck-ton of tokens?
-
To be explicit, allowing an LLM to write all of your code means that you are no longer developing code, nor are you learning how to develop code, nor are you going to become a better software engineer as a result. This means that, across almost every major tech company, software engineers are being incentivized to stop learning how to write software or solve software architecture issues.
-
Meta and Amazon love to lay off thousands of people at a time, which makes it even harder to work out why something was built in the way it was built, which is even harder when an LLM that lacks any thoughts or intentions builds it.
-
We’re already seeing the consequences! Amazon lost hundreds of thousands of orders! Meta had a major security breach! The foundations of these companies are being rotted away through millions of lines of slop-code that, at best, occasionally gets the nod from somebody who has “software engineer” on their resume,
-
This is creating a kind of biblical plague within software engineering — an entire tech industry built on reams of unmanageable and unintentional code pushed by executives and managers that don’t do any real work.
-
An engineer at OpenAI processed 210 billion “tokens” — enough text to fill Wikipedia 33 times — through the company’s artificial intelligence models over the last week, the most of any employee. At Anthropic, a single user of the company’s A.I. coding system, Claude Code, racked up a bill of more than $150,000 in a month.
-
with one software engineer in Stockholm spending “more than his salary in tokens,” though Roose adds that his company pays for them.
-
one person found a loophole in Claude’s $20-a-month plan using a piece of software made by Figma that allowed them to burn $70,000 in tokens.
-
These people are sick, and are participating in a vile, poisonous culture based on needless expenses and endless consumption.
-
Companies incentivizing the amount of tokens you burn are actively creating a culture that trades excess for productivity, and incentivizing destructive tendencies built around constantly having to find stuff to do rather than do things with intention.
-
I used AI, but, in my opinion, it was nowhere close to replacing my role. Before I was laid off, I helped build an internal site for Amazon using AI. I hadn't really coded before, but with a colleague's help, I learned how to vibe code with a lot of trial and error. I thought using AI for this project and showcasing different skills would make me more valuable to the company, but in the end, it didn't keep me from being laid off.
-
Large Language Models are creating Silicon Valley Habsburgs — workers that are intellectually trapped at whatever point they started leaning on these models that were subsidized to the point that their bosses encouraged them to use them as much as humanly possible.
-
hearing Spotify’s CEO say that its top developers are basically not writing code anymore makes me deeply worried, because this shit isn’t replacing software engineering at all — it’s mindlessly removing friction and putting the burden of “good” or “right” on a user that it’s intentionally gassing up.
-
Generative code is a digital ecological disaster, one that will take years to repair thanks to company remits to write as much code as fast as possible.
-
In the end, everything about AI is built on lies.
-
Hundreds of gigawatts of data centers in development equate to 5GW of actual data centers in construction.
-
Hundreds of billions of dollars of GPU sales are mostly sitting waiting for somewhere to go.
-
Every executive forcing their workers to use AI is a ghoul and a dullard, one that doesn’t understand what actual work looks like, likely because they’re a lazy, self-involved prick.
-
AI is actively poisonous to the future of the tech industry. It’s expensive, unproductive, actively damaging to the learning and efficacy of its users, depriving them of the opportunities to learn and grow, stunting them to the point that they know less and do less because all they do is prompt.
-
And in the end, AI is a test of your introspection. Can you tell when you truly understand something? Can you tell why you believe in something, other than that somebody told you you should, or made you feel bad for believing otherwise?
71 more annotations...
-
CEO Confronted Over Using AI to Clone Real People Without Their Consent on Mar 25, 26
-
Last August, it quietly debuted a feature called “Expert Review,” through which users could get feedback on their writing from what the company styled as AI clones of professional writers.
-
“You do not have our permission to use our names to do this,” Patel challenged early in the interview. “You had little check marks next to the name that indicated it was somehow official. People did not like this, I did not like this, and you removed the feature.”
-
“First off, I’d say I understand and respect how challenging a world it is for experts and idea generators these days,”
-
we under-delivered for them. And I’d really like to apologize for that. That was not our intention.”
-
“It wasn’t good for experts, it wasn’t good for users. It was a fairly buried feature. It had very little usage,”
-
“When somebody uses your content, should they attribute you? Of course. And to attribute you, you have to use your name,”
-
Respectfully, we believe the claims are without merit. The idea that the feature is impersonation is quite a big stretch.”
-
the lawsuit is really about using names and identities for commercial purposes without their consent.
6 more annotations...
-
AI and productivity: the great lesson of the textile industry on Mar 24, 26
-
by giving each worker a third loom, production would increase by half. Yet it didn't work. And the reason it didn't work helps us understand what is happening today with artificial intelligence.
-
the worker's real job was no longer to weave, but to watch the weaving
-
The machines were not the bottleneck. The humans' capacity to monitor them was.
-
Factories therefore had to slow their machines down by 15% to maintain acceptable quality.
-
Sixty years later, a single worker was running 18 looms and producing fifty times more than a century earlier. But to get there, factories had had to triple their training investment per worker.
-
Christian Catalini, Xiang Hui and Jane Wu argue that this same dynamic is repeating today with AI, but faster and at a much larger scale
-
But the cost of verifying that output (confirming it is correct, that it is not a hallucination, that it can be trusted) rises, because it still depends on human judgment.
-
We can now produce as much as we want, at an ever-lower cost. But if we cannot reliably evaluate what is produced, a growing share of that output will be useless at best. And that verification is very hard.
-
The natural response to this problem would be to use AI to check AI!
-
AI systems trained on similar data share the same blind spots.
-
Verification must therefore be done by humans, and its complexity requires that they be well trained for it.
-
A dehumanized conception of work leads people to believe that the two no longer need to talk to each other and work together. The virtuous professional system that once produced human judgment developed through experience is blocked at both ends.
-
The solution was not to slow down the machines; it was to seriously train the people responsible for monitoring them.
-
you cannot think about introducing a new technology strictly in terms of human-machine substitution.
-
what matters is not the technology, but how it is put to work, and the challenge is to invent a value-creation system and an organization that get the most out of it.
13 more annotations...
-
AI and the Future of Work: Liberation vs. Control… | Corporate Rebels on Mar 24, 26
-
More monitoring. More control. More centralization. Fewer decisions for the people closest to the work.
-
Sure, AI can make your company more efficient. The question nobody seems to be asking is: efficient at what?
-
“And now AI shows up, and the first instinct of most companies is to use it to do the exact opposite of everything that actually works.”
-
AI does not automatically liberate or oppress. It amplifies whatever system it is plugged into.
-
Pathway 1: Control. Use AI to monitor, measure, rank, and replace.
-
Pathway 2: Liberation. Use AI to eliminate the boring tasks that make people hate Mondays
-
Which layers of corporate bureaucracy can AI genuinely replace, and which ones will it accidentally reinforce?
-
AI use is heavily concentrated in high-level, non-routine occupations where digital infrastructure already exists.
-
Unless organizations deliberately redesign how AI is deployed, the technology will reinforce existing power structures rather than replace bureaucracy.
-
When AI makes a team of 10 as productive as a team of 20, what happens to the other 10?
-
But a broader study by economists Johnston and Makridis found that high-AI-exposed occupations actually saw gains in both wages and total employment
-
A Swedish study tracking firms that received AI adoption grants showed increased job postings with no net employment decrease over five years.
-
The pattern seems to depend entirely on intent: are companies using productivity gains to grow, or simply to cut headcount?
-
Can AI serve as the operating system for a team? These are the questions where we have found the least existing research. That is exactly why we are pursuing them.
-
History shows that technological revolutions ultimately create more jobs than they destroy, but the transition matters enormously.
-
Let's use AI to give humans wings, not to put them on a leash.
-
AI will amplify whatever system it is plugged into. So before you add AI to your organization, make sure the organization itself is worth amplifying.
17 more annotations...
-
Collaboration between AI agents is doomed to fail on Mar 22, 26
-
Some AI advocates promote a vision in which dozens of agents collaborate to solve complex problems with minimal or even no human intervention. That scenario is a myth.
-
AI agents can be effective when they work individually on distinct tasks, but when they are combined to carry out complex missions, they fail most of the time.
-
One agent ignores its peers' instructions, redoes work already done, struggles to delegate and frequently ends up stuck in planning paralysis.
-
"AI systems fail for the same structural reasons as human organizations, even after removing every causal factor specific to humans,"
-
No career motivation. No ego. No politics. No fatigue. No cultural norms. No competition for status. The agents are language models executing instructions. The dysfunction appeared anyway.
-
the more agents there are and the more complex their organizational structure, the more frequently they fail to complete the tasks assigned to them,
-
When a single agent was used to produce the result, the agents succeeded in all 28 attempts.
-
with several agents in a hierarchical organization, one of them assigning tasks to the others, the failure rate climbed to 36%.
-
Finally, an 11-stage pipeline, or organizational swarm, never produced a satisfactory result. In fact, that pipeline burned its entire project budget in just five planning phases, without producing a single line of code.
-
"Every experiment I ran failed in a counter-intuitive way, and precisely in the way it was supposedly designed not to fail,
-
The same failure patterns that characterize human organizations (excessive review, preference-based gatekeeping, governance conflicts, budgets exhausted by coordination problems) emerge in multi-agent AI systems with identical mathematical signatures,"
-
The substrate changes; the physics of large-scale coordination stays constant."
-
Individual agents working on discrete, well-defined tasks are reliable, but multi-agent collaboration often fails,
-
Coordination complexity, context handoffs and error propagation between agents mirror human organizational dysfunction at scale."
-
From the outside, it looks like multi-agent cooperation, but architecturally it is sequential specialization with deterministic handoffs and built-in human checkpoints."
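A minimal sketch of that pattern, with invented step names and a stubbed-out review function (nothing here is taken from the vendor's actual implementation): each specialized agent acts once on a well-defined task, hands its output to the next step deterministically, and a human checkpoint sits at a fixed point in the chain.

```python
# Illustrative sketch of "sequential specialization with deterministic handoffs
# and built-in human checkpoints" (hypothetical names; not from the article).
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[str], str]          # one single-purpose "agent" acting on one task
    needs_human_review: bool = False   # a fixed checkpoint, not emergent negotiation

def human_approves(step_name: str, output: str) -> bool:
    # Stand-in for a real review queue; here we just log and approve.
    print(f"[human checkpoint] reviewing output of '{step_name}'")
    return True

def run_pipeline(steps: list[Step], payload: str) -> str:
    for step in steps:
        payload = step.run(payload)    # deterministic handoff to the next specialist
        if step.needs_human_review and not human_approves(step.name, payload):
            raise RuntimeError(f"rejected at checkpoint: {step.name}")
    return payload

pipeline = [
    Step("extract", lambda text: text.strip().lower()),
    Step("summarize", lambda text: text[:40], needs_human_review=True),
    Step("file", lambda text: f"filed:{text}"),
]
print(run_pipeline(pipeline, "  Quarterly vendor invoices to reconcile  "))
```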
-
The real value of AI agents today lies in automating repetitive, well-defined tasks at scale and assisting human analysts with fast data processing and consistent outputs, the engineer stresses. This is not emergent collective intelligence.
-
"At every handoff between systems, meaning is lost, context is compressed and assumptions are made,
-
Agents, for their part, don't chat in the hallways."
-
L'argument marketing voulant que des dizaines d'agents travaillent ensemble de manière autonome est une utopie qui contredit la théorie de l'information,
-
L'idée que des dizaines d'agents puissent collaborer spontanément, sans supervision ni limites, est aussi absurde que celle d'humains le faisant,
-
symbl, a workforce management solutions provider, has deployed more than 150 agents, but their interactions are strictly controlled
-
They coordinate with each other and with their human colleagues through the orchestration layer we put in place,
-
We have AI agents dedicated to specific tasks and others with shared memory and shared task lists,
-
The key is role clarity before deployment. What is this digital worker responsible for? Where does the work come from? Where does it go? In which cases is human intervention required?"
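Those four questions translate naturally into a declarative role definition. A hypothetical sketch (field names and values are invented for illustration, not taken from any vendor's platform):

```python
# Hypothetical role declaration answering the four questions from the excerpt.
agent_role = {
    "responsible_for": "triage inbound support tickets by product area",
    "work_comes_from": "helpdesk queue (new tickets)",
    "work_goes_to": "the assigned product team's queue",
    "human_intervention_when": [
        "classification confidence below threshold",
        "customer explicitly requests escalation",
    ],
}
print(agent_role["responsible_for"])
```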
-
l'échec des systèmes multi-agents est un problème d'organisation et d'orchestration, et non un problème technologique en soi, ajoute-t-il
-
L'étude montre que les agents souffrent des mêmes problèmes de coordination que les humains lorsqu'ils travaillent ensemble
-
The ideal model is a hybrid workforce: digital workers (i.e., the agents) with clearly defined roles, human workers providing oversight and judgment, and an orchestration layer connecting the two
25 more annotations...
-
Microsoft and OpenAI: the end of the honeymoon in 7 acts | LinkedIn on Mar 22, 26
-
November 2023. Microsoft 365 Copilot launches at $30 per user per month, to a fanfare of press releases. Enterprises buy licenses because they have to "do AI". Then they lock them in a drawer.
-
Of 450 million commercial M365 licenses, only 3.3% result in real adoption of paid Copilot.
-
Nadella makes a big move: he recruits Mustafa Suleyman, co-founder of DeepMind
-
The unofficial mission: build proprietary models (the MAI project) and structurally reduce dependence on OpenAI.
-
"La relation Microsoft-OpenAI a craqué le jour où ils ont embauché Mustafa Suleyman."
-
In August 2024, Bloomberg runs an unambiguous headline: "Big Tech Fails to Convince Wall Street That AI Is Paying Off."
-
Three class actions are filed against Apple for misleading advertising
-
OpenAI acquires Windsurf ($3 billion) and denies Microsoft access to its intellectual property
-
"OpenAI and Microsoft Tensions Are Reaching a Boiling Point." The contractual clause stipulating that Microsoft loses access to the models once AGI is reached becomes a ticking time bomb
-
Microsoft converts its rights to future profits into a 27% stake in OpenAI's new for-profit entity.
-
On November 3, 2025, OpenAI signs a $38 billion agreement with Amazon Web Services, its very first major cloud contract outside Microsoft.
-
The total deal reaches $110 billion over 8 years, including OpenAI's commitment to consume 2 gigawatts of compute capacity on Amazon's Trainium chips
-
For Microsoft, this is a contractual betrayal. According to the Financial Times, Redmond is considering legal action against OpenAI and AWS,
-
Azure keeps the stateless APIs; AWS inherits the "stateful" environments, the ones where AI agents retain context,
-
In January 2026, Apple and Google formalize a strategic multi-year partnership:
-
Bloomberg reveals in February 2026 that the Siri overhaul has been pushed back again, probably to late 2026 or beyond. Apple judged that Google's technical foundation was simply more solid for its needs. For OpenAI, losing Apple, the reference ecosystem for mainstream consumers, is an alarm bell
-
Faced with the instability of its relationship with OpenAI, Microsoft plays another card. In November 2025, Microsoft, NVIDIA and Anthropic sign a three-way partnership: Anthropic commits to purchasing $30 billion of capacity on Azure, while Microsoft and NVIDIA invest $5 billion and $10 billion respectively in
-
The merger of the commercial and consumer Copilot teams under Jacob Andreou (former Snap VP), reporting directly to the CEO. And Mustafa Suleyman, recruited with great fanfare two years earlier to "revolutionize Copilot", is sidelined from day-to-day product responsibilities.
-
The artificial separation between consumer and enterprise Copilot was a design mistake.
-
This is not a market stabilizing. It is a market reorganizing in depth, under the combined pressure of technical performance, elusive ROI, and cloud sovereignty concerns.
-
The real lesson of this period: generative AI is a process disruption before it is a technology disruption. Having the best models is not enough to create usage. Having usage is not enough to create value.
19 more annotations...
-
Software-Defined Products: Why Projects Must Reinvent Themselves on Mar 20, 26
-
Machines are learning to think. And they’re getting better at it every day.
-
That I can confidently argue the opposite today is a testament to how remarkably successful engineering has been. We’ve climbed high up the curve of mechanical innovation — so high that we’re beginning to see the ceiling.
We are approaching saturation — a point where more no longer means much more.
-
The pace is slowing — not because engineers have run out of ideas, but because they were so effective in the first place.
-
The physical world operates by natural laws — and those laws are largely understood.
-
When it comes to logic —meaning what machines can think and decide — the picture looks entirely different. Here, we are still at the beginning.
-
Highly complex machines — take my beloved trucks, or equally, passenger cars — operate with a decentralized network of many system and domain control units, connected through gateway controllers in a specific architecture.
This represents enormous complexity — and it demands a clear methodology from developers to fully grasp the system as a whole and systematically bring it to life.
That methodology is called Systems Engineering. Remember that term. It will be worth your while.
-
The car will then be defined not by driving dynamics, efficiency, or safety, but by the software the customer consciously uses and experiences while driving.
-
For developers, this means one thing concretely: welcome to the world of parallel processes.
The hybrid development process will continue to exist — as I outlined in my last article.
-
I frequently encounter the view that in the future, only the agile process will remain — that the classical development approach will simply die out like the dinosaurs.
I consider this a dangerous misconception — and I want to raise your awareness of it.
Ask yourself, deliberately, with every development initiative: What is this actually about?
Is the physical substance of the product evolving — or is this “only” about the application layer?
Those who can answer that question clearly gain immediate clarity about the right approach — and dissolve a great deal of apparent complexity in the process.
-
Centralized storage and processors in data centers somewhere in the world, with which machines exchange information in real time via wireless connection over the air.
The machines are talking to each other. And they’re learning as they go.
-
But — and this matters to me — these systems do not emerge on their own.
Many more years of human developers will be needed to design, implement, and make these architectures usable.
-
Projects will still exist in the future — but they will look very different from the ones we know today.
What lies ahead is a coexistence of hybrid and purely agile approaches, operating side by side.
15 more annotations...
-
7 Factors That Drive Returns on AI Investments, According to a New Survey on Mar 20, 26
-
Organizations are spending significant dollars on AI. According to one estimate, U.S. companies spent $37 billion in 2025 on generative AI alone. Increasingly, senior executives and boards are asking about returns, and there are looming consequences for leaders who don't have good answers.
-
45% of respondents said they are achieving a great deal of value from AI and an additional 45% said they are achieving moderate value. Only 9% indicated that their organization is achieving a small amount of value, and virtually zero (0.2%) said they were achieving no value.
-
The great majority of respondents to the survey say they are getting value from AI. How they define that value, however, can differ quite a bit. For example, 14% of respondents report receiving a great deal of value from AI, but only slight return on investment in the technology. Similarly, 9% report moderate value but substantial ROI.
-
Most organizations focus on AI value from internal process improvements. Several of the executives we interviewed, however, were equally or more focused on AI in customer offerings.
-
Generative AI dominates media coverage, but it’s not what most organizations find most valuable. Among our survey respondents, 50% said their companies get the most value from analytical AI, such as dynamic pricing or customer targeting. Rule-based AI, often found in anti-money-laundering systems, insurance underwriting, and healthcare clinical decision support, as well as in robotic process automation, was a close second; 40% of respondents said these tools produced the most value. Only 9% chose generative AI, and just 2% agentic AI, though these have, of course, only been in wide use for a few years.
-
Whether custom-built or drawn from the management literature, a structured approach to moving AI from idea to production to measured value is often essential for creating value. Ally Financial, the American bank holding company, has a custom “AI playbook” that guides its business lines from use-case exploration through responsible production deployment. In another case, an electrical utility uses the “stage gate” approach—more common in R&D—to manage the same journey.
-
Most organizations assign AI value accountability to chief data/analytics/AI officer (38%) or individual functional executives (35%). Only 2% assign it to the CFO, but when they are responsible for achieving AI value, 76% of organizations in our survey reported that they achieved a “great deal” of value. That compares to 53% under CIOs or CTOs, and only 32% under functional executives. The finance function brings rigor, credibility, and organizational authority that other roles often lack.
-
There is a two-tier challenge: 58% of organizations haven’t trained employees in AI productivity and tool use, while 29% acknowledge leaders lack the understanding to drive AI value creation. Organizations that invest in both employee upskilling and leadership AI fluency see a 23-percentage-point advantage in value realization.
-
These models predict meaningfully different levels of value achievement. This economic maturity model is based on three components.
-
The first is simply to put AI systems into production
-
A second component of the maturity model is to assess the value of production use cases,
-
The third component of the maturity model is to aggregate value across the organization and report it, at least informally.
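Taken together, the three components read as a simple cumulative scale. A hedged sketch of that reading (the 0-3 scoring is an illustration, not the authors' published rubric):

```python
# Illustrative 0-3 "economic maturity" score: one point per component in place.
# The scoring scheme is an assumption layered on the three components quoted above.
def economic_maturity(in_production: bool,
                      use_case_value_assessed: bool,
                      value_aggregated_and_reported: bool) -> int:
    return sum([in_production, use_case_value_assessed, value_aggregated_and_reported])

print(economic_maturity(True, True, False))  # -> 2
```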
-
An AI Economic Maturity Model
-
The path to AI value is not primarily a technical challenge—it is a management one. Every organization now has access to AI technology, but only some will deploy it in ways that generate real and measurable economic returns. These seven factors, and especially the maturity model, offer a practical roadmap for becoming one of them.
18 more annotations...