Recent Bookmarks and Annotations
-
The great misalignment in business transformation on Oct 10, 25
-
For two hours, the discussion centered on architectures, integrations and customer experience. While the content was relatable to us all, there was virtually no discussion about the business.
-
However, it was disappointing because, in two hours, not a single business problem had been solved.
-
business leaders are asking tough questions about the relevance of certain roles. If the business can run profitably without them, were they ever essential in the first place?
-
Advocates argue that workers will inevitably be replaced, while critics frame it as the next wave of technological unemployment.
-
However, AI, tariffs and layoffs don’t explain the whole story.
-
Companies that have downsized over the past few years are still operating profitably. Many have displayed resilience even after reducing thousands of jobs.
-
we’ve seen an explosion of creative new job titles in the tech sector.
-
Even more striking is how these roles lean towards a form of religion. You’ll often hear a designer preach authenticity, empathy and creativity.
-
Instead of starting with a business problem and finding the right technical solution, many of these roles have resorted to performing arts.
-
Even traditional technical roles, such as business analysts and project managers, have been criticized for having questionable impact.
-
The problem is compounded by the emphasis on being “more artistic” or “more technical.”
-
What is new, however, is the growing realization that some jobs may not need to come back at all.
-
It’s a deeper structural issue: a mismatch between the roles being created and the business problems companies need solved.
-
For years, the corporate mantra has been to find employees who blend creative and analytical traits,
-
Analysis without insight devolves into repetitive reporting, adding noise rather than clarity. Creativity without business grounding drifts into theatre, producing workshops and “innovation sessions” that inspire but fail to deliver results.
-
- Start with the business problem. Every role should have a direct line of sight to a problem worth solving, whether it’s reducing costs, increasing revenue, managing risk or improving customer experience.
- Measure contribution, not random actions. Two hours of technical discussions and drama classes are only valuable if they move the needle on business outcomes. Reporting metrics must shift from volume of work to impact delivered.
- Integrate, don’t isolate. Tech teams must operate as partners with business leaders, not as isolated units pursuing technical perfection.
- Challenge role inflation. Companies should resist the urge to create new titles filled with buzzwords. Instead, they should consolidate around roles that demonstrably deliver value.
-
The true measure of success is not how technical or creative someone appears, but how directly they contribute to solving a real business problem.
15 more annotations...
-
Doing the Work: Why Learning is Key to Agentic AI Success & Avoiding Workslop on Oct 10, 25
-
But in the enterprise, Agentic AI is still mostly at the stage of proofs of concept and simple stand-alone apps, and not yet ‘in production’ as part of the fabric of the digital workplace
-
If we don’t think hard about WHY we are doing work, and what work should look like, we are all going to drown in a wave of AI content.
-
If that doesn’t work, they should just do the work themselves
-
we need to think about what we are doing, why, and how to support people in using technology more productively, rather than just as a lazy shortcut.
-
When organizational leaders advocate for AI everywhere all the time, they model a lack of discernment in how to apply the technology. It’s easy to see how this translates into employees thoughtlessly copying and pasting AI responses into documents, even when AI isn’t suited to the job at hand.
-
the need for more human collaboration as a solution to the workslop problem
-
Workslop is an excellent example of new collaborative dynamics introduced by AI that can drain productivity rather than enhance it.
-
workslop is really just an amplification of the existing problem of corporate spam.
-
GenAI seems easier to grasp, and also easier to teach, than transforming the way we work.
-
Doing the work of process mapping, value chain analysis and other aspects of AI readiness is a lot harder than just issuing vague instructions
-
Organizational readiness for gen AI starts with making unstructured data and its flows more visible, structured, and strategically prioritized.
-
Some leaders are notoriously short-term in focus, with compensation models that disincentivize tackling necessary organisational improvement if it looks like a long-term project.
-
The problem is not so much that sceptics aren’t taking AI seriously, but that too many people want a revolutionary, disruptive technology which doesn’t really change anything.
-
but at least in terms of agentic AI, the difficult question is: does the business really know what it wants?
-
Prompt engineering and requirements engineering are literally the same skill—using clarity, context, and intentionality to communicate your intent and ensure what gets built matches what you actually need.
-
and the processes - similar to functions in software - that use them. Knowledge and process geeks have talked about the value of mapping and cataloguing these for years, but I am willing to bet many leaders zoned out in these meetings.
-
By chaining together agents in teams or groups with different roles and specialisations, we can build agentic capabilities that could usefully run and manage whole areas of an organisation’s process work.
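The idea of chaining specialised agents can be sketched in plain Python. This is a hypothetical illustration, not any particular framework: each "agent" is just a named step that transforms the running work product, and real agentic stacks (planning, tool use, model calls) are far richer.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    role: str
    act: Callable[[str], str]  # takes the current artifact, returns an updated one

def run_pipeline(agents: list[Agent], task: str) -> str:
    """Pass one work item through a chain of specialised agents in sequence."""
    artifact = task
    for agent in agents:
        artifact = agent.act(artifact)
        print(f"[{agent.name} / {agent.role}] -> {artifact}")
    return artifact

# Example chain: an intake agent, a drafting agent, and a reviewing agent.
pipeline = [
    Agent("intake", "classify the request", lambda t: f"classified({t})"),
    Agent("drafter", "produce a first draft", lambda t: f"draft({t})"),
    Agent("reviewer", "check the draft", lambda t: f"approved({t})"),
]
result = run_pipeline(pipeline, "refund request")
```

The point of the shape is that each role is explicit and inspectable, which is what makes it plausible to hand whole areas of process work to such a chain.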
-
If we want to free people to do the real work of value creation, then orchestrating and automating the underlying processes and platforms that support this work is a good goal to have in mind.
-
Instead of using GenAI within an existing manual process management system to create more workslop, we can instead work on automating the boring repetitive process work as far as possible and allow people to do real, value creating work.
-
It is quite challenging to develop learning programmes for a topic that seems to evolve every week, but many firms are now doing a decent job of supporting GenAI adoption in the enterprise.
-
agentic AI is not an easily defined skillset that can be taught like prompting.
-
we need to get a better leadership handle on what we are trying to achieve and how to ensure the technology is deployed in service of advancing business capabilities.
-
Here are some useful skills and ways of seeing that we can borrow from executive education, such as:
- team capability mapping and analysis
- process and value chain mapping
- requirements and context management
- test plans, observability and monitoring
- knowledge and data management
- combinatorial innovation and design thinking
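The first two items on that list lend themselves to a small worked example. Everything below is invented toy data; the point is only that once processes and skills are made explicit, the capability gaps can be computed rather than guessed at.

```python
# Hypothetical process map: each process names the capabilities it requires.
process_requirements = {
    "invoice_handling": {"data extraction", "validation", "escalation"},
    "customer_onboarding": {"identity checks", "data entry", "validation"},
}

# Hypothetical team capability inventory.
team_capabilities = {"data extraction", "validation", "data entry"}

# Gap analysis: which processes need capabilities the team does not have?
gaps = {
    process: needed - team_capabilities
    for process, needed in process_requirements.items()
    if needed - team_capabilities
}
print(gaps)
# {'invoice_handling': {'escalation'}, 'customer_onboarding': {'identity checks'}}
```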
-
- How can L&D facilitate continuous learning for people and agents, ideally together, without resorting to above-the-flow courses, workshops and so on?
- Can we help people imagine and create a digital twin of their teams and functions, including both its process landscape and the capabilities of its people?
- How can we help people understand and build new process systems run by agentic AI and then explore what they can do with them once they are running successfully?
- Can we realistically use personal learning agents to help people find content and tools to further their development whilst in-the-flow of work, and then experience it however they wish?
22 more annotations...
-
SAP bets on human-AI partnership in the workforce of the future on Oct 09, 25
-
“The future of work is about humans and AI together,” said Gina Vargiu-Breuer, SAP’s chief people officer and labor director.
-
“Skills are the new supply chain. Just like energy or raw materials fuel the industrial age, AI fuels the digital age, and this is powered and shaped by skills.”
-
“AI is breaking apart and bridging job descriptions, so that tasks, not whole jobs, are being automated, while new opportunities are emerging around these places,”
-
“AI lets people stretch into adjacent territories and skills that scale on top of the foundation of the expertise they already have.”
-
“Recruiters can start looking beyond talent sourcing and into workforce planning.”
-
“This approach actually accelerates adaptability growth by effectively responding to external disruptions in real time,”
-
Technology alone won’t determine success—organizational culture will,
-
“Only organizations that foster growth, resilience and inclusion will be the ones to turn AI into a real competitive advantage.”
-
Work will be less about the toil and more about orchestration and delegation. We’re moving from doing the work to designing the work.
-
“The most valuable skill of the next decade will be the ability to unlearn, relearn and break old digital habits to build new ones,” he said.
-
industries most exposed to AI are achieving a fourfold increase in productivity growth.”
-
The message to HR leaders was that AI will reshape people management, but success depends on viewing it as a partnership tool rather than a replacement threat,
-
“We are making Joule deeply aware of the person it’s partnering with, their role and the business process context in which they operate,” he said. “There is an AI assistant for every role; an AI assistant for you.”
11 more annotations...
-
Managing the risks of agentic AI on Oct 08, 25
-
In 2024, a tribunal declared that the chatbot’s statements were “negligent misrepresentation” and awarded Moffatt both the refund and compensation.
-
There’s also the need for training the tool out-of-the-box and potentially retraining or modifying its behavior as it evolves through use.
-
Those exercises need to be accounted for in the initiative to have a fair gauge of the tool’s return on investment (ROI).
-
The rollout processes for agentic AI need to have checkpoints in place that monitor these costs.
-
AI agents operate off a complex set of instructions and information, but they’re not foolproof.
-
users could inject code into the AI to instruct the agent to behave in a new manner.
-
This potential requires careful cybersecurity processes built into the AI workflow.
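A toy sketch can show why untrusted text must never be merged into an agent's instruction channel. This is illustrative only: keyword filters like the one below are trivial to evade, and real defenses rely on strict separation of trusted and untrusted channels, least privilege, and output validation rather than pattern matching.

```python
# Obviously instruction-shaped phrases an attacker might embed in user input.
SUSPICIOUS = (
    "ignore previous instructions",
    "disregard your system prompt",
    "you are now",
)

def looks_like_injection(user_text: str) -> bool:
    """Naive red-flag check for instruction-shaped user input (demo only)."""
    lowered = user_text.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS)

def build_messages(system_prompt: str, user_text: str) -> list[dict]:
    """Keep trusted instructions and untrusted input in separate roles."""
    if looks_like_injection(user_text):
        # Route to human review instead of letting the agent act on it.
        raise ValueError("possible prompt injection; route to human review")
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_text},
    ]
```

The design point is the channel separation, not the filter: the agent's standing instructions live in one role, user content in another, and anything instruction-shaped in the user channel is treated as data to inspect, never as a command to follow.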
-
The phenomenon of ‘AI hallucinations’ is very real and can expose an organization to both reputational damage and legal repercussions.
-
AI will be confidently incorrect.
-
The European Union’s AI Act has been rolling out since mid-2024 and holds businesses accountable for the activity and use of AI in numerous capacities,
-
The key here is in building validation steps into automated processes and AI workflows
-
Where agentic AI is responsible for providing advice or instruction, there should be ‘human-in-the-loop’ processes that ensure correct information is delivered or decisions are signed off before being presented to customers.
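A minimal sketch of such a human-in-the-loop gate, under assumed inputs (a draft answer plus a confidence score and an impact flag from upstream, all hypothetical): low-confidence or high-impact outputs are queued for sign-off rather than sent straight to the customer.

```python
def route_output(draft_answer: str, confidence: float, high_impact: bool,
                 threshold: float = 0.9) -> str:
    """Decide whether an agent's draft goes out directly or waits for a human."""
    if high_impact or confidence < threshold:
        return "queued_for_human_review"
    return "sent_to_customer"

# A confident, low-impact answer goes straight out...
assert route_output("Your order ships Tuesday.", 0.97,
                    high_impact=False) == "sent_to_customer"
# ...but anything touching refunds or policy waits for sign-off.
assert route_output("You are entitled to a refund.", 0.97,
                    high_impact=True) == "queued_for_human_review"
```

The threshold and the impact flag are the policy levers: tightening either widens the set of decisions a person must sign off before they reach a customer.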
-
How does that fit with your organization’s sustainability commitments?
-
how the use of agentic AI reflects on your brand and culture.
-
AI isn’t going away anytime soon, but before we jump on the bandwagon, we should all consider just how ready our business is for it.
13 more annotations...
-
Why AI Won't Save Your Broken System - DAVID NOWAK on Oct 08, 25
-
The DORA researchers identified something they call the “AI mirror effect” where AI doesn’t transform your organization; it amplifies what already exists.
-
The research identified seven distinct team archetypes, from “Harmonious High-achievers” who excel across all dimensions to “Legacy Bottleneck” teams trapped in constant reaction mode.
-
What fascinates me is how this mirrors what we see in other domains.
-
Consider this data point: 30% of developers report little to no trust in AI-generated code, even while using these tools daily.
-
Clear AI stance means not just policies, but organizational clarity on expectations. Without this, developers operate either too conservatively or too permissively.
-
Healthy data ecosystems require high-quality, accessible, unified internal data. AI amplifies data quality issues, so garbage in becomes amplified garbage out.
-
AI-accessible internal data connects tools to your actual systems, repositories, and documentation rather than operating in isolation.
-
Strong version control practices become essential when AI accelerates code generation velocity and you need mature rollback capabilities.
-
Working in small batches helps manage risk when AI increases the speed of change delivery.
-
User-centric focus serves as the north star that prevents AI optimization theater where teams get faster at building the wrong things.
-
Quality internal platforms provide the foundation that makes everything else possible by enabling self-service capabilities and safe experimentation.
-
Organizations with mature platform engineering practices—90% have adopted some form—see dramatically better AI outcomes.
-
it’s not about having platforms, it’s about having quality platforms that serve as force multipliers.
-
I’ve watched too many organizations build internal platforms that become bureaucratic bottlenecks.
-
when developers can self-service infrastructure, experiment safely, and deploy frequently, they can effectively leverage AI’s acceleration.
-
how Value Stream Management acts as a “force multiplier specifically for AI investments”.
-
AI creates what the researchers call “localized pockets of productivity that are often lost to downstream chaos”.
-
Wolters Kluwer’s research on AI-driven value stream management³ shows that without end-to-end flow optimization, individual productivity improvements rarely translate to organizational value.
-
You can’t optimize what you can’t see, and most organizations lack visibility into how work actually flows from idea to customer value.
-
The research identifies that “default AI usage patterns deliver productivity while blocking skill development”
-
This directly contradicts expectations that AI would make syntax knowledge obsolete, suggesting developers need deeper language understanding to work effectively with AI.
-
Organizations that don’t intentionally design learning opportunities into AI-assisted workflows risk creating future capability gaps that could be devastating when AI tools inevitably fail or change.
-
But the real transformation happens when organizations evolve to AI-native workflows
-
Start with Your Foundation Before Deploying More AI Tools
Audit your data quality, deployment practices, and team communication patterns. AI will amplify whatever you already have, so fix the fundamentals first.
-
Invest in Platforms as Products
Rather than treating internal developer platforms as cost centers that provision resources, treat them as strategic assets that enable AI adoption across your organization.
-
Implement Value Stream Visibility
Understand how work flows through your organization. You can’t optimize AI’s impact without mapping your value streams and identifying where AI can solve system-level constraints rather than just individual productivity bottlenecks.
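The "visibility" step can be made concrete with a toy example. All timestamps below are made up: record when a single work item enters each stage, compute the hours spent per stage, and surface the bottleneck. Real value stream tooling does this across many items and teams.

```python
from datetime import datetime

# Hypothetical stage-entry timestamps for one work item.
stages = {
    "idea":        datetime(2025, 10, 1, 9, 0),
    "development": datetime(2025, 10, 1, 11, 0),
    "review":      datetime(2025, 10, 3, 11, 0),
    "deployed":    datetime(2025, 10, 3, 12, 0),
}

# Hours spent in each stage = gap between entering it and entering the next.
names = list(stages)
durations = {
    names[i]: (stages[names[i + 1]] - stages[names[i]]).total_seconds() / 3600
    for i in range(len(names) - 1)
}
bottleneck = max(durations, key=durations.get)
print(durations)                  # hours spent in each stage
print("bottleneck:", bottleneck)  # the item sat 48h in development
```

Even this crude view makes the system-level constraint visible: speeding up coding with AI does nothing for an item that then waits two days between stages.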
-
Develop AI Governance That Enables Rather Than Constrains
Create clear policies that provide guardrails without micromanaging. The research shows that unclear expectations lead to either over-conservative or reckless AI usage.
-
Focus on Outcomes, Not Adoption Metrics
Measure whether AI is helping you deliver better software faster to customers, not just whether people are using AI tools or reporting productivity gains.
-
Here’s what the research doesn’t say directly, but implies: most organizations will struggle to realize AI’s full potential because they lack the foundational systems thinking required. They’ll buy tools, run pilots, and celebrate individual productivity gains while missing the systemic changes needed for sustained impact.
-
The organizations that win won’t necessarily have the best AI tools—they’ll have the best systems for learning, adapting, and scaling AI capabilities across their entire value delivery process.
-
“Harmonious High-achievers” represent only 20% of teams, while “Constrained by Process” teams (17%) are burning out despite having stable systems, and “Legacy Bottleneck” teams (11%) are trapped in constant reaction mode.
-
The question is whether you’ll approach AI adoption as tool procurement or system transformation.
-
This points to deeper systemic issues that AI alone cannot solve.
31 more annotations...
-
4 Organizational Red Flags That Turn Off Job Candidates on Oct 07, 25
-
Among the factors that shape whether or not they want to work somewhere, four key ones emerged: lack of clarity about the job or organization; bad recruitment/selection practices; a prevalence of unpleasant or unhappy people; and high turnover in the position and/or a poor corporate track record.
-
- Clarity: Know what you want and what you offer.
- Courtesy: Treat others with respect, professionalism, and kindness.
- Coherence: Tell a coherent story about past decisions/successes/failures and how they connect to the present moment and to future plans.
-
Companies sometimes fail to convey the basic facts about mission, values, and culture.
-
Multiple teams cited “no discussion about company culture” or “failure to articulate company values and provide examples” as problematic.
-
When details on the role are sparse, it can also create concern. “Lack of clarity in expectations for job responsibilities” was noted as a major red flag, in particular lack of “definitions of success” and success metrics and “unclear or shifting expectations.”
-
- Communicate the big picture internally. To avoid misalignment, have a pre-set list of questions or statements that make the company mission, values, and culture clear.
- Understand hiring goals. Everyone should have a common understanding of the role being filled: why it is vacant, why it was created to begin with, and struggles and accomplishments of those who have held the role.
- Determine the skills needed to do the job, the responsibilities it involves, and what success looks like. Then write a clear job description, competency checklist, and outline of desired outcomes or performance goals. Finally, design interview questions to help uncover how candidates match up to your requirements.
-
a generally “disorganized process,” with a “lack of prep/urgency” and insufficient communication about timing.
-
“blocking you from talking to peers” or “not allowing/encouraging multiple interviews with different people in the organization”
-
Finally, respondents noted “biases (work-life balance, gender roles, ageism)” and “getting too personal” around family life and similar topics.
-
- Clearly outline the process and set a timeline: how many interviews to prepare for, what the flow of each will be, and when to expect a response.
- Train hiring managers and any other interviewers on the parameters of confidential information and what questions can and cannot be legally asked in an interview.
- Check alignment between jobseekers and managers. In an initial interview, you might “ask a candidate to state what they believe the job is.” In subsequent ones, ask what they’ve already discussed and learned.
- Solicit feedback from candidates on how well the entire process was run.
-
One group put it bluntly: no one is excited to work with an interviewer who is “a jerk.”
-
- Support hiring managers. If recruitment comes on top of all their other work, they might be stretched too thin to do it well. Make sure they have time to properly screen and evaluate candidates and conduct in-depth interviews.
- Set expectations for civility. Establish a general protocol for how visitors to the office, including job candidates, are to be treated. Explain and model the positive energy you expect interviewers to bring to their interactions.
- Prioritize morale before a doom loop begins. If people aren’t happy, ensure the organization is taking steps to remedy the situation.
-
“the same position open for a long time or over and over”
-
- Don’t dismiss or downplay legitimate concerns. Address them in the interview process, acknowledging and explaining the scope of problems, which may be more nuanced than the candidate realizes if they are primarily informed by rumor and headlines.
- Ensure that everyone the candidate interacts with is on the same page as to their understanding of the issues and what steps are being taken to remedy them.
- Find out what the candidate’s personal concerns are and ask what would mitigate them.
12 more annotations...
-
Consultants Forced to Pay Money Back After Getting Caught Using AI for Expensive "Report" on Oct 07, 25
-
“independent assurance review” bore concerning signs that Deloitte had cut corners, and included multiple errors such as references to nonexistent citations — a hallmark of AI slop.
-
the United Kingdom’s six largest accounting firms hadn’t been formally monitoring how AI impacts the quality of their audits, highlighting the possibility that many other reports may include similar hallucinations.
-
“Instead of just substituting one hallucinated fake reference for a new ‘real’ reference, they’ve substituted the fake hallucinated references and in the new version
-
“So what that suggests is that the original claim made in the body of the report wasn’t based on any one particular evidentiary source.”
-
“Deloitte conducted the independent assurance review and has confirmed some footnotes and references were incorrect,”
-
“Anyone looking to contract these firms should be asking exactly who is doing the work they are paying for, and having that expertise and no AI use verified,”
5 more annotations...
-
Cory Doctorow Says the AI Industry Is About to Collapse on Oct 06, 25
-
Doctorow warns of the impending collapse of the AI industry — a hype-fueled financial disaster that he says it’s too late to avoid.
-
“So, you’re saying a third of the stock market is tied up in seven AI companies that have no way to become profitable and that this is a bubble that’s going to burst and take the whole economy with it?” the student asked fearfully.
-
Doctorow answered that the bubble is being propped up by tech mega-corporations who are now begging investors to come aboard, now that their growth potential is slowing to a halt.
-
To court investors, the monopolists are selling a lie that AI can replace human workers — when in reality, AI experiments are failing at 95 percent of companies that attempt them.
-
“AI cannot do your job, but an AI salesman can 100 percent convince your boss to fire you and replace you with an AI that can’t do your job,”
-
“AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations,
-
that the best — and only — thing to do is to “puncture the AI bubble as soon as possible, to halt this before it progresses any further and to head off the accumulation of social and economic debt.”
-
“The most important thing is the investor story and the ensuing mania that has teed up an economical catastrophe that will harm hundreds of millions or even billions of people.”
6 more annotations...
-
Is your AI system just a 'yes man'? Understanding this HR risk on Oct 06, 25
-
Since AI is trained on historical patterns, it often amplifies blind spots rather than correcting them.
-
“An AI yes man is a system that tells you what you want to hear rather than what you need to know,”
-
When AI systems reinforce blind spots rather than expose them, Kuehl reminds HR leaders that the consequences can affect entire workforces.
-
In recruitment, filters can rank candidates to match existing staff profiles, reinforcing bias and shutting out new perspectives.
-
Performance management systems create similar blind spots. “Analytics that simply mirror manager ratings hide favoritism or inconsistent standards,”
-
60% of leaders expect employees to update their skills, roles and responsibilities to adjust to AI’s impact. Yet only 25% of workers say they’ve completed training on how to apply AI at work.
-
“It’s a red flag when a vendor talks about cultural alignment but cannot show how the system surfaces uncomfortable or counterintuitive findings,”
-
limited transparency around training data and bias testing, and overreliance on manager inputs over employee-generated data.
-
These safeguards should include explainability standards so leaders can trace how conclusions are reached, channels for employees to challenge questionable results and ownership structures that extend beyond HR.
-
Organizations with responsible AI frameworks are seeing measurably better outcomes: Sixty-five percent are actively upskilling their workers in AI, compared with just 51% of organizations without frameworks.
-
He recommends tracing results back to raw data and cross-checking multiple sources. Contradictions between surveys, interviews and exit data often reveal the real issues.
-
- How does this system highlight negative or contradictory findings?
- What steps can detect bias in recruitment, promotion or pay data?
- How often do insights challenge leadership assumptions, and how are they surfaced?
- Can we see examples where the tool uncovered uncomfortable truths rather than confirming expectations?
- What level of data access and explainability will my team have to validate findings?
-
The scale of the challenge is significant: Only 10% of organizations in the Adecco study qualify as “future-ready.”
11 more annotations...
-
The digital divide is a problem no one can ignore - FredCavazza.net on Oct 02, 25
-
Despite promises of performance and competitiveness, distrust and a lack of understanding are holding back the adoption of these tools.
-
The problem is not that we lack innovation, but rather the time and the will to integrate it
-
a slowdown in AI adoption, or rather an adoption that is much slower than expected
-
The statistics being brandished rest on fuzzy definitions and biased questions: “Do you use AI in your company?”
-
the mainstream media use and abuse biased AI surveys so they can write anything and its opposite,
-
it is much simpler to find excuses for not using or taking an interest in new technologies (security, confidentiality, sovereignty…) than to try to understand and adopt them.
-
the latest figures from the barometer of digital transformation among small and medium-sized businesses are in, and they are not good
-
This study reveals a decline in the perceived benefits of digital technology
-
Likewise, more than half of respondents have fears about the loss or hacking of their data, up from 36% in 2020.
-
- A weaker perception of digital technology’s ability to help companies stand out from the competition (38% versus 39% last year);
- A drop in the frequency of social media use to promote the company or its products (46% daily use in 2025 versus 61% in 2023);
- A share of employees trained in digital skills that remains very low (20% over the past 12 months).
-
the share of SMEs and very small businesses using AI remains very low, for uses that mostly revolve around text generation or information retrieval via a chatbot.
-
only 44% of the population trusts the use of the internet (40% in 2015), while barely 38% trust social media (35% in 2011 and 36% in 2020).
-
trust is at the heart of the adoption and use of new technologies, all the more so with artificial intelligence, which is by far the most complex and opaque of them.
-
this trust deficit toward new technologies, and AI in particular, slows adoption and penalizes digital usage, and therefore the resources associated with it, and therefore jobs.
-
I don’t know whether it is a survival reflex or anticipatory revenge, but the media delight in shock figures and categorical opinions about the risks AI poses to employment:
-
While I am not questioning the veracity of these figures and opinions at all, I am much more skeptical about the conditions under which they were collected, and thus about the questions asked to gather them.
-
Yet we have enough hindsight to understand that the much-feared replacement by AI is a fable, because the reality on the ground is quite different.
-
it takes a quarter of a second to press a camera’s shutter release, but years to acquire the skills needed to take beautiful photos
-
Saying that AI will improve companies’ competitiveness is a bit like saying that sustainable development will mitigate the effects of climate change.
-
We all agree to believe this assertion, but many disagreements appear as soon as it comes to settling on a timeline and means of delivery, on objectives and indicators, on the resources to mobilize…
-
even with education, trust is a complex feeling that rests on a set of objective and subjective criteria
-
The problem is that the damage is done, because the contradictory signals relayed by the media (vendors’ ultra-optimistic rhetoric vs. employees’ existential fears) have definitively sapped the market’s morale and confidence.
-
with every new model or new feature, distrust grows toward this artificial intelligence that fascinates as much as it frightens.
-
Still, the digital divide widens as new usages take hold,
-
let me remind you that hundreds of billions of dollars have already been invested in the infrastructure used to run generative AI, hundreds of billions of dollars that will not be invested elsewhere, by companies whose shareholders demand the fastest possible return on those investments, hence pressure on the market, hence an optimistic, even alarmist, discourse to stimulate adoption
-
enormous pressure on companies, and indirectly on employees, who are ordered to adopt new technologies and AI as quickly as possible in order to deliver the performance gains promised by the vendors.
-
Poor uses of generative AI make us lose time and energy when they are supposed to save us both
-
the problem here is not generative AI, but the misuse of generative AI, and thus the lack of understanding and mastery behind it.
-
generative AI and the consequences of its rise are a reality that no one can escape.
-
It is thus clearly not an option to carry on as if nothing were happening, to merely observe the evolution of digital usage from afar and let oneself “drift with the current.”
-
This necessarily involves more intensive use of digital tools and AI, preceded by awareness-raising and upskilling.
-
All that companies and organizations are missing is the spark that will make them want to invest energy and resources in digital activities and generative AI again, because evolution only moves in one direction.
-
That spark is training and support for employees and managers in their understanding of what AI is and what generative models can bring them
32 more annotations...