Recent Bookmarks and Annotations
-
Asset Manager Warns That OpenAI Is Likely Headed for Financial Disaster on Jan 27, 26
-
For one, experts have pointed out that OpenAI’s business fundamentals are inherently different from those of competitors like Google. These legacy businesses can tap existing revenue sources to bankroll their major AI capital expenditures.
-
The Sam Altman-led OpenAI, however, has raised record amounts of cash and has vowed to spend well over $1 trillion before the end of the decade without the advantage of an existing business that generates ongoing revenue.
-
The gap between the AI industry’s promises of a human-level AI-driven future and reality, in other words, has never been wider, much like the enormous gulf between AI company valuations and their lagging revenues.
-
As former Fidelity manager George Noble, who has spent decades in asset management, notes in a lengthy tweet, the company may already be “FALLING APART IN REAL TIME.”
“I’ve watched companies implode for decades,” he wrote. “This one has all the warning signs.”
-
we may have hit a point of diminishing returns in which each new iteration of the same AI model provides smaller and smaller benefits.
-
“The low-hanging fruit is gone,” Noble said. “It’s going to cost 5x the energy and money to make these models 2x better.”
-
Noble argues that the “AI hype cycle is peaking” and that “diminishing returns are becoming impossible to hide”
-
He says he is “not touching OpenAI-adjacent plays at these valuations” since the “risk profile is astronomical.”
-
In a separate tweet, Noble compared Altman losing his cool during a podcast appearance last year, when asked about the company’s eyebrow-raising financials, to Enron’s former CEO Jeffrey Skilling calling an analyst an “asshole.”
-
“OpenAI failure wouldn’t be an indictment of AI. It would be merely the end of the most hype-driven builder of it,” he wrote.
-
As the Wall Street Journal reported at the time, the CEO urged staffers to focus on improving ChatGPT, even at the cost of delaying other projects, as Google continued to play a successful game of catch-up.
-
“OpenAI is a cash incinerator,” he added. “The product is losses for investors.”
13 more annotations...
-
Agentic AI Is Not AI - by Jean-Paul Paoli on Jan 27, 26
-
Classic AI tells you which customers might churn—you decide what to do about it.
-
GenAI drafts the retention email—you review and send it
-
Agentic AI decides whether to send it, to whom, and follows up based on responses.
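To make the three paradigms concrete, here is a minimal Python sketch. Every name in it (the stub functions, the 0.8 threshold, the toy data) is an illustrative assumption, not anything drawn from the article:

    import random

    def predict_churn_risk(customer: dict) -> float:
        """Classic AI stand-in: a trained model scores churn probability."""
        return random.random()

    def draft_retention_email(customer: dict) -> str:
        """GenAI stand-in: a model drafts content for a human to review."""
        return f"Hi {customer['name']}, here is an offer to stay with us."

    def retention_agent(customers: list[dict]) -> None:
        """Agentic stand-in: decides whether to act, acts, and follows up."""
        for c in customers:
            if predict_churn_risk(c) > 0.8:       # decides whether, and to whom
                email = draft_retention_email(c)  # produces the action itself
                print(f"[agent] sent to {c['name']}: {email}")
                print(f"[agent] follow-up scheduled for {c['name']}")

    customers = [{"name": "Ada"}, {"name": "Grace"}]

    # Classic AI: a human reads the scores and decides what to do.
    scores = {c["name"]: predict_churn_risk(c) for c in customers}

    # GenAI: a human reviews the draft, then sends it manually.
    draft = draft_retention_email(customers[0])

    # Agentic AI: no human in the loop unless one is designed in.
    retention_agent(customers)

The governance implication follows directly: the first two patterns leave a human decision point intact, while the third removes it by default.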
-
Organizations that can’t distinguish between their three AI beasts will struggle to govern the fastest-growing one.
-
Classic AI needs data scientists and machine learning engineers who build and validate models. Statistical foundations. Understanding of bias and drift. This talent shortage is real but well understood; it’s been an “AI skills gap” conversation for the past decade.
-
Generative AI needs what swyx calls “AI Engineers,” aka people who wield foundation models through APIs rather than build them. Many have never taken a machine learning course.
-
Agentic AI needs something that barely exists yet: orchestrators who can redesign organizations around autonomous systems. Not just technical fluency but organizational design capability.
-
An ML engineer skilled at fraud detection models isn’t automatically qualified to design agent orchestration. A prompt engineer building chatbots doesn’t necessarily understand model risk governance. Skills don’t transfer automatically across paradigms.
-
So Agentic AI might automate or redesign a workflow using GenAI. Who is going to design, program, or govern the agents? There are no established training paths for this. No career ladders. No bootcamps.
-
McKinsey found that 88% of AI users (including creators of some agentic automation) are nontechnical workers.
-
No single team can own all three beasts.
-
McKinsey finds fewer than 30% of companies report CEO sponsorship of their AI agenda. And only 21% of enterprises have mature governance models for autonomous agents.
-
Artefact warns of a “shadow management phenomenon”: employees deploying agents without HR-like oversight, because deployment is instantaneous and the cost negligible.
-
Not “How do we use AI?” but “What would this function look like if we applied the right paradigm to each problem?”
-
Recognition that your ML engineers, your prompt designers, and your (yet-to-be-hired) orchestrators are solving fundamentally different problems.
-
Staff for each beast. Your ML engineer who builds churn models may not be the right person to design agent orchestration.
-
Govern for each beast. Point-in-time model validation works for predictive AI. Real-time content guardrails work for GenAI. Continuous behavioral monitoring—with clear escalation paths—works for agents.
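As a hedged sketch of that third mode, here is what continuous behavioral monitoring with an escalation path might look like in Python. The bounds (50 actions per hour, a 5% error rate) and the escalation action are illustrative assumptions, not figures from the article:

    from dataclasses import dataclass

    @dataclass
    class AgentMonitor:
        """Watches an agent's behavior continuously and escalates anomalies."""
        max_actions_per_hour: int = 50    # assumed behavioral bound
        max_error_rate: float = 0.05      # assumed quality bound
        actions: int = 0
        errors: int = 0
        halted: bool = False

        def record(self, ok: bool) -> None:
            if self.halted:
                return
            self.actions += 1
            self.errors += 0 if ok else 1
            if self.actions > self.max_actions_per_hour:
                self.escalate("hourly action volume exceeded")
            elif self.actions >= 20 and self.errors / self.actions > self.max_error_rate:
                self.escalate("error rate above threshold")

        def escalate(self, reason: str) -> None:
            # The escalation path: pause the agent and alert its owner.
            self.halted = True
            print(f"[escalation] {reason} after {self.actions} actions")

    monitor = AgentMonitor()
    for i in range(60):
        monitor.record(ok=(i % 9 != 0))  # simulate occasional failures

The design point is that the agent is paused, not merely logged, once it drifts outside its behavioral envelope.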
-
Move at each beast’s speed. Predictive AI moves at model-development pace (months to years). GenAI moves at application-development pace (weeks to months). Agentic AI moves at organizational-change pace (quarters to years).
17 more annotations...
-
Continuous improvement programs on Jan 27, 26
-
Most continuous improvement programs don’t fail because the methods are wrong. Lean, Six Sigma, and operational excellence (OPEX) principles are well established. Instead, the programs fail because they are designed as initiatives rather than as operating systems.
-
While initiatives rely on sponsorship, cadence, and attention, operating systems rely on defaults, incentives, and habits.
-
When executive focus or business priorities shift (as inevitably happens), initiatives decay. Operating systems persist.
-
Traditional continuous improvement programs follow a familiar pattern. Senior leadership defines a methodology, launches it with visible sponsorship, certifies practitioners, and tracks adoption through periodic reviews. Early results are often positive. Participation is high, metrics improve, and a key project or two lands. Then, quietly, the system weakens.
-
Top-down continuous improvement assumes that improvement is something people opt into rather than something embedded in how work gets done.
-
Improvement becomes an additional task, not part of daily execution.
-
First, improvement competes with delivery under pressure, and delivery always wins. Second, accountability is diffuse: leaders sponsor, experts facilitate, teams participate, but no one owns improvement as a daily obligation. Third, the system depends on sustained executive attention, which is a scarce and volatile resource.
-
However, Six Sigma remained an overlay on top of existing work rather than a redesign of how work was planned and executed. Managers were accountable for outcomes, not for maintaining the system.
-
The lesson is not that Six Sigma or similar frameworks are flawed. It is that methods cannot compensate for weak structural integration.
-
I created an organization-level continuous improvement program with one objective: embed continuous improvement into the team’s DNA.
-
Simple at first – name of the project, what the project would fix, and level of complexity to execute
-
The launch consisted of me communicating the program and incentivizing (via rewards and recognition) individuals to identify viable projects.
-
I was convinced that ‘embedding continuous improvement into the team’s DNA’ needed to involve creating the habit of continuous improvement, so incentivizing the behavior of finding project opportunities seemed on point.
-
I spent the time really convincing the leaders under me that this approach was the right one. It was also controversial with my VP and his peers, who vocally advocated for a monetary savings goal in year one. My push-back about creating habits (e.g. making it part of the operating system) won.
-
The leader’s job in continuous improvement is not to champion the program. It is to provide and reinforce clarity.
-
This distinction becomes even more important as artificial intelligence (AI) accelerates execution. Faster systems amplify structural weaknesses. If improvement depends on human attention and goodwill, AI will expose that fragility quickly.
-
Most organizations already know what to improve. The harder question is whether they are willing to redesign how work actually runs so improvement becomes unavoidable. That is not a cultural challenge. It is a design choice.
16 more annotations...
-
« Un SIRH qui se limite à gérer de l’administratif ne suffit plus » (“An HRIS that only handles administration is no longer enough”) on Jan 23, 26
-
tools can no longer limit themselves to managing; they must now support, inform, and structure day-to-day HR work
-
to remain useful, HR solutions must move beyond the traditional scope and become part of a global approach, serving the experience, performance, and impact of organizations.
-
First, a performance and organization challenge: how to simplify, automate, and secure processes to gain efficiency in a more constrained environment.
-
an experience challenge: how to keep a collective project, a culture, and values alive when hybrid work disperses the company and makes the manager’s role more complex
-
a challenge of impact and projection over time: how to give meaning and set the company’s action within a sustainable, human, and legible trajectory.
-
It is no longer just about administering or supporting, but about orchestrating these balances. Tools play a key role here, not as an end in themselves but as levers of simplification and coherence, serving HR teams, managers, and employees alike.
-
The HRIS has long fulfilled its original mission: organizing, securing, and automating HR processes.
-
A classic HRIS manages processes and collects data. But collecting is not acting. Producing data does not mechanically create value.
-
Companies’ current challenges extend far beyond HR administration. They now expect tools capable of orchestrating, giving meaning, and supporting people over the long term.
-
An HRIS that only handles administration is no longer enough. It must evolve toward a more global approach, able to support managerial postures, aid decision-making, and connect processes to real value creation, human as much as organizational.
-
innovation can no longer come from piling up tools. It comes from vision. From a different way of thinking about the HRIS’s role in the company.
-
There is much talk of engagement, especially among executives and HR directors, but for employees the subject often remains abstract.
-
from a management tool to a global, interconnected platform able to address operational, strategic, and human needs at once.
-
To help executives, HR directors, and managers step back from their daily routine, draw inspiration from other practices, compare, benchmark, and share lessons learned
-
to enable companies to durably reconcile performance, experience, and impact.
-
The goal is no longer to spend time keeping the machinery running, but to break free of it.
-
Teams will be able to focus more on analysis, anticipation, and support, rather than execution.
-
Tomorrow’s HRIS will need to help them better understand what is happening in their teams, put objective terms on sometimes intuitive situations, and know where and how to act.
-
Finally, employees themselves will become more active participants. By creating spaces for exchange, feedback, and cooperation, HRIS platforms will help strengthen mutual support, experience sharing, and skills development,
-
moving from an HR function consumed by operations to an augmented HR function, able to inform, support, and structure collective action day to day.
18 more annotations...
-
Today’s ponderable: What does it feel like to meet your digital twin for the first time? - Digital Workplace Group on Jan 23, 26
-
Credibility through experience: Clients trust us to guide them through complexity. How can we advise on AI-driven transformation if we haven’t experienced it ourselves?
-
Agility as a competitive edge: Large firms often move slowly, constrained by scale and bureaucracy. Digital workplace teams, like boutique consultancies, thrive on agility. By embracing AI early, we can prototype, iterate and pivot faster,
-
Human + machine = exponential value: AI doesn’t replace expertise; it augments it. Imagine a consultant who can instantly surface patterns from thousands of data points, draft strategies in minutes and personalize recommendations at scale
-
Adviser as sense-maker: In a world awash with data and rapid technological change, a strategic adviser’s role as a sense-maker becomes even more vital
-
Consultant as client zero: By being the first to adopt and experiment with digital twin technology, consultants serve as their own test case by learning firsthand what works, what doesn’t, and where the real value lies.
-
- What does it mean to trust a machine with my voice, my ideas and my reputation?
- How do I balance efficiency with authenticity?
- Can technology help me be more human, not less?
-
The answers lie in curiosity and courage.
-
Early adoption goes beyond just chasing shiny objects. It demonstrates responsible leadership and includes testing, learning and sharing what works (and what doesn’t),
-
- Onboarding and career development: New hires could interact with a digital twin of their future role or team to understand workflows, culture and expectations before day one. Personalized learning paths could be guided by a twin that adapts to individual progress.
- Leadership development and coaching: Executives could use digital twins to simulate decision-making scenarios, test communication strategies and receive feedback based on real-world data, accelerating leadership growth.
- Meeting continuity and knowledge capture: A digital twin could attend meetings on behalf of an employee, summarize discussions and flag action items. This ensures continuity and reduces the risk of knowledge gaps when people are unavailable.
- Operational modelling and risk management: Large corporates could create organizational twins to simulate the impact of restructuring, mergers or policy changes, helping leaders anticipate outcomes before making decisions.
- Employee well-being and workload balancing: Digital twins could monitor patterns in work habits and suggest adjustments to prevent burnout, while respecting privacy and consent.
- Process optimization and resource planning: Digital twins could model internal workflows – such as project timelines, resource allocation and cross-team dependencies – to identify bottlenecks and improve efficiency across the organization.
-
- Collaboration and knowledge flow: Could digital twins act as intelligent proxies in meetings or projects, ensuring continuity when people are unavailable? Explore how they might capture institutional knowledge and make it accessible in real time.
- Decision support and scenario planning: Digital twins can model complex systems – from supply chains to organizational structures. Leaders should ask: How can these simulations inform strategic decisions without replacing human judgment?
- Ethics, trust and transparency: What boundaries need to be set around identity, data and representation? Employees must understand how their digital twin is created, what it can do and where control lies.
- Integration with existing platforms: How will digital twins fit into current digital workplace ecosystems? Consider interoperability with collaboration tools, HR systems and analytics platforms to avoid creating silos.
- Change management and culture: Introducing digital twins isn’t just a technical shift – it’s a cultural one. Leaders should plan for communication, training and governance to build confidence and adoption.
8 more annotations...
-
OpenAI identifie un écart croissant entre capacités des modèles d’IA et usages réels (“OpenAI identifies a growing gap between AI model capabilities and real-world usage”) - IT SOCIAL on Jan 22, 26
-
This finding draws on the study “Ending the Capability Overhang,” published by OpenAI in January 2026. The analysis rests on aggregated data from more than 800 million weekly ChatGPT users, covering more than 70 countries and every usage profile, from the general public to enterprises.
-
The majority of users tap barely a fraction of the advanced features available today.
-
“Access is the entry ticket. Agency is what turns that access into real impact,” the report sums up.
-
According to the study, the length of tasks that can be completed reliably doubles every seven months.
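Taken at face value, a seven-month doubling time is an exponential: d(t) = d0 * 2^(t/7), with t in months. A quick illustrative computation in Python (the one-hour baseline is an assumption, not a figure from the study):

    # Reliable-task horizon under a 7-month doubling time: d(t) = d0 * 2**(t / 7)
    d0_hours = 1.0  # assumed baseline: ~1-hour tasks done reliably today
    for months in (0, 7, 14, 21, 28):
        print(f"after {months:2d} months: ~{d0_hours * 2 ** (months / 7):.0f}h tasks")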
-
Capabilities alone produce no impact until they are integrated into real workflows
-
Power users draw on seven times more reasoning capability than the median user, even though they have the same access. Those who delegate complex tasks, combine multiple steps, and use specialized tools get the most out of AI, while the majority remains confined to occasional, one-off uses.
-
The most advanced countries use three times more reasoning capability per person than the rest. The divide widens most in coding, professional writing, and data analysis,
-
Conversely, basic usage remains largely uniform, evidence that differentiation hinges on depth of integration, not initial access.
-
integrating advanced features (data analysis, agents, automation) translates into markedly greater time savings.
-
The “capability overhang” now imposes a new reality. The challenge is no longer getting access to the tools, but transforming organizations, processes, and professional habits to exploit complex systems and accept delegating part of the work.
8 more annotations...
-
Agentic AI pilots - PEX Network on Jan 22, 26
-
Most agentic artificial intelligence (AI) pilots fail: Over 40 percent of agentic AI projects are canceled due to unclear value and high risk.
-
Unrealistic goals and plug-and-play assumptions derail adoption.
-
Data readiness is critical:
-
Cross-functional teams, training, and clear ownership drive adoption.
-
Agentic AI must align with workflows and legacy systems to scale.
-
Vendors are promising game-changing automation. Despite the excitement, the hard reality is sobering: most agentic AI pilots fail to deliver real value.
-
Gartner
forecasts that more than 40 percent of agentic AI projects will be canceled by the end of 2027 due to high costs, unclear business value, and insufficient risk controls.
-
The technology itself – grounded in advanced large language models (LLMs), multi-step reasoning, and tool integration – has enormous potential.
-
1. Unrealistic expectations
One of the most common root causes of failure is expectation misalignment. Many companies embark on agentic AI initiatives with headlines in one hand and wishful thinking in the other.
-
Yet, when the rubber hits the road, the reality is that these systems are still nascent and require careful framing.
-
2. Data chaos
AI functions on data, and agentic systems are no exception. They need structured and reliable data to reason, plan, and ultimately act. When agents lack access to data, their decisions become unreliable, feedback loops fail, and pilots stall.
-
84 percent of organizations cited business risk, 80 percent cited lack of transparency, and 66 percent cited regulatory concerns
-
- Conduct a data audit before building agents (a toy sketch follows this list).
- Create governed, searchable, and lineage-tracked data sources.
- Connect agentic systems to enterprise knowledge graphs or indexed repositories.
- Prioritize API-first architectures for clean and standardized data access.
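As a toy illustration of the first item above, a pre-agent data audit might look like the following; the required fields and the 10 percent null tolerance are assumptions for illustration only:

    REQUIRED_FIELDS = {"customer_id", "last_contact", "status"}

    def audit_source(name: str, rows: list[dict]) -> list[str]:
        """Flag gaps that would make an agent's decisions unreliable."""
        issues = []
        if not rows:
            return [f"{name}: empty source"]
        missing = REQUIRED_FIELDS - set(rows[0])
        if missing:
            issues.append(f"{name}: missing fields {sorted(missing)}")
        cells = len(rows) * len(rows[0])
        nulls = sum(1 for r in rows for v in r.values() if v is None)
        if nulls / cells > 0.10:  # assumed tolerance for null values
            issues.append(f"{name}: more than 10% null values")
        return issues

    sample = [{"customer_id": 1, "last_contact": None, "status": "open"}]
    print(audit_source("crm_export", sample))  # -> flags the null rate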
-
3. Change management failures
Technology is advancing faster than organizations are able to absorb it. Agentic AI sits at the intersection of multiple disciplines – AI engineering, process design, data governance, risk management, and change management. Yet, many pilots are staffed narrowly, often driven by innovation teams or IT functions with limited business ownership.
-
Business teams, meanwhile, struggle to trust or adopt systems they didn’t help design. As a result, pilots tend to technically “work” but often fail to become operationally essential.
-
Key practices include:
- Building genuinely cross-functional teams that include process owners, frontline employees, risk leaders, and technologists from day one.
- Upskilling beyond data science. Teams need training in agent design, human-in-the-loop workflows, AI governance, and operational monitoring – not just prompt engineering.
- Assigning clear ownership for each agent, similar to a product owner role, responsible for performance, ethics, and outcomes.
- Designing agents as collaborators, not replacements. Early use cases should augment human decision-making, not bypass it (see the sketch below).
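A minimal Python sketch of that collaborator pattern: the agent proposes, a human gate approves, and only then does anything execute. The function names and the console prompt are illustrative assumptions, not anything from the article:

    def agent_propose(task: str) -> dict:
        """Stand-in for an agent's planning step: returns a proposed action."""
        return {"task": task, "action": f"draft reply for: {task}"}

    def human_approves(proposal: dict) -> bool:
        """The human-in-the-loop gate; in practice a review UI, here a prompt."""
        answer = input(f"Approve '{proposal['action']}'? [y/N] ")
        return answer.strip().lower() == "y"

    def execute(proposal: dict) -> None:
        print(f"executing: {proposal['action']}")

    proposal = agent_propose("customer refund request")
    if human_approves(proposal):
        execute(proposal)  # acts only after explicit sign-off
    else:
        print("rejected: logged for the agent's owner to review")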
-
4. Integration failure
Building an intelligent agent is no longer the hard part. Integrating it into the enterprise is.
Most agentic AI pilots fail not because the agent can’t reason or plan, but because it is dropped into an environment it was never designed to survive in: fragmented systems, brittle workflows, and decades of accumulated technical debt.
-
Successful organizations flip the model. Instead of asking “How do we plug an agent into our system?” they ask “How do we make our systems agent-ready?”
18 more annotations...
-
Forrester Analyst: AI Has Failed To Move The Needle On Global Productivity - Dataconomy on Jan 21, 26
-
Annual productivity growth registered 2.7% from 1947 to 1973, then declined to 2.1% between 1990 and 2001, and further to 1.5% from 2007 to 2019.
-
Forrester’s recent research on AI job replacement projects that AI could displace 6% of jobs by 2030, totaling approximately 10.4 million positions.
-
Gownder also discussed the effectiveness of AI implementations within large organizations, noting that “a lot of generative AI stuff isn’t really working.”
-
McKinsey data showed similar findings with approximately 80% of projects failing to deliver value.
-
He clarified that recent large-scale job cuts were primarily financial decisions, not AI-driven,
-
He sees a parallel with AI, where job losses driven by outsourcing to cheaper labor can sometimes be misattributed to AI.
4 more annotations...
-
Quand l’IA fait gagner du temps mais au détriment de la qualité dans les organisations (“When AI saves time at the expense of quality in organizations”) - IT SOCIAL on Jan 16, 26
-
Nearly all employees surveyed say they save between one and seven hours per week thanks to AI, and more than three quarters believe they are more productive than a year ago.
-
Workday shows that most companies are watching the wrong indicator.
-
But once the time spent correcting, clarifying, and rewriting AI output is factored in, more than a third of the time saved disappears.
-
A ten-hour productivity gain translates, on average, into nearly four hours spent repairing the tool’s output. Apparent productivity rises, but net value shrinks.
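The net-value arithmetic is worth making explicit. A quick computation using the article’s rough figures (ten hours of apparent gain, about four hours of rework):

    gross_hours_saved = 10.0
    rework_hours = 4.0  # time spent correcting, clarifying, rewriting AI output
    net_hours = gross_hours_saved - rework_hours
    share_lost = rework_hours / gross_hours_saved
    print(f"apparent gain: {gross_hours_saved:.0f}h, net value: {net_hours:.0f}h "
          f"({share_lost:.0%} of the gain disappears)")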
-
It stems from an adoption model in which AI is layered onto existing processes without redefining what quality work actually is
-
AI speeds up production but shifts the burden of quality onto employees, who become the permanent correctors of still-unstable systems.
-
The most enthusiastic and intensive users bear a large share of AI’s hidden cost.
-
A large majority of these users check AI output at least as rigorously as they would a human colleague’s work, which turns every gain in speed into cognitive load and operational overhead.
-
Conversely, the highest-performing profiles are concentrated more in IT and marketing roles.
-
In these contexts, imperfection is tolerable because value comes from analysis, iteration, and rapid reframing.
-
The same technology thus produces opposite effects depending on the nature of the business processes and the level of quality expected.
-
Employees are using brand-new tools inside work frameworks designed before cognitive automation spread.
-
Two thirds of executives say training is among their priorities, yet only slightly more than a third of the employees most exposed to rework see an actual increase in their access to training
-
Workday highlights a revealing trade-off. Organizations today reallocate a larger share of AI-attributed gains to technology and infrastructure than to skills development.
-
This gap is not a matter of social policy but of industrial logic.
-
AI performance can no longer be steered by hours saved alone. It must be steered by net value, factoring in the time lost to corrections and the effects on decision quality.
-
In operations, this amounts to tracking first-pass yield rather than output volume.
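First-pass yield is the share of units that come out right the first time, with no rework: FPY = (units produced - units reworked) / units produced. A short illustration with made-up counts:

    units_produced = 200
    units_reworked = 48  # e.g. AI outputs that had to be corrected
    fpy = (units_produced - units_reworked) / units_produced
    print(f"first-pass yield: {fpy:.0%}")  # 76%, regardless of raw volume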
-
As AI becomes cognitive infrastructure, the central question is no longer automation but measurable improvement in outcomes.
16 more annotations...
-
The Blame Game: How Bureaucracy Eats… | Corporate Rebels on Jan 15, 26
-
Emails fly. Meetings multiply. Everyone starts digging through procedures and policies.
Not to understand and fix the problem, but to make sure their own hands look clean.
-
But when problems inevitably arise, the focus shifts from solving to blaming.
- Who missed the step?
- Who didn’t tick the box?
- Who’s guilty?
-
Sociologists call this loop the vicious circle of bureaucracy: a self-perpetuating cycle where the initial problems of a bureaucratic system lead to the creation of more rules and procedures, which in turn create new problems and require further regulations.
-
Risk had to travel up a chain of command so rigid that truth got filtered through layers of approval, PowerPoints, and procedure.
-
If you follow the rules, disaster unfolds. If you break them, you’re the one at fault.
-
Initially, everyone wants more of it, simply because more responsibility equals more power, promotions, and prestige.
Until something breaks.
-
Managers will cite compliance. Frontline workers will blame unclear rules. Executives will cite rogue employees.
-
Because every rule, every layer, every sign-off adds a hidden tax on initiative.
Instead of improving products or serving customers, organizations spend all their energy protecting themselves from blame.
-
What if responsibility wasn’t something to dodge when things go wrong, but something to share?
Progressive organizations around the world are already showing it’s possible.
-
You can only create compliance. And compliance is the enemy of learning.
If we want organizations that adapt and evolve, we need to stop designing for blame and start designing for growth.
8 more annotations...