In June 1975, BusinessWeek published a report titled "The Office of the Future." The office, the article said, was about to be "revolutionized" by terminals, keyboards, and electronic files. In a passage whose stark solemnity might raise a smile today, a consultant from Arthur D. Little predicted that "by 1990, most record management will be electronic." What is interesting, however, is that the same article included a warning that is often forgotten when a technology becomes fashionable: changing the way people work "always takes longer" than we imagine. The story of the "Paperless Office" was not a total failure (today we manage documents in ways that were unthinkable 50 years ago), but it was a hard lesson about productivity: digitizing is not the same as simplifying. Abigail Sellen and Richard Harper made this point in their 2002 study "The Myth of the Paperless Office," noting that, in the organizations they examined, the adoption of email was associated at the time with an average increase of 40% in paper consumption. The technology that was supposed to save work ended up, in many cases, creating more versions, more printouts, more copies "just in case," and more administrative work. The promise was not a lie; it was incomplete. Today, with artificial intelligence (AI), we are likely repeating that pattern: believing we are buying productivity when, sometimes, what we acquire is complexity disguised as a shortcut.
It would be a mistake to claim that AI does not raise productivity. The micro-level evidence (the day-to-day kind, in concrete tasks) is real and increasingly robust. In a large-scale study at a customer service center, published by the National Bureau of Economic Research (NBER), generative AI assistance raised average productivity by 14% (measured as cases resolved per hour). Most interesting is how the benefit was distributed: less experienced and lower-performing workers improved far more (up to 34% in cases resolved per hour), while among the most expert workers the effect was small and some metrics even showed signs of quality deterioration.
Interesting data also emerges on inclusion. A recent randomized experiment with participants from Argentina was designed to answer an uncomfortable question: does AI narrow or widen the productivity gap between people with different educational levels? The finding was, on its face, hopeful. AI increased productivity across all groups, and the effect was larger in the group with less formal education, so the gap between the two shrank drastically (the authors estimate that roughly three-quarters of the initial performance difference was closed). The study itself, however, warns of something fundamental: an "equalizer" on a specific task does not imply sustained equality if the underlying skills, such as the ability to evaluate, decide, and learn, remain unequal.
These cases could be read as a triumph: more output, smaller gaps. But that is precisely where the illusion is born, because these results exist within a framework we tend to take for granted: internet access, basic digital literacy, devices, time to experiment, relatively formal and measurable work contexts. And that, unfortunately, is not where most of the world lives. If you run a small or medium-sized enterprise and someone tells you, "with AI we will double productivity," what would the response be? Honestly, few would ask for a timeline, a process, and a metric. Yet there is already preliminary data on this, and it is less impressive than we imagine.
Daron Acemoglu, in another NBER study, estimates that the effects of AI on total factor productivity (TFP) should not exceed 0.66% cumulatively over ten years; he even notes that this figure may be inflated, because much of the early evidence comes from relatively easy tasks, not from the hard, contextual tasks where mistakes are costly for a business. By the same logic, he estimates that the boost to GDP would also be modest, around 1% over ten years, depending on investment assumptions. This aligns with some conclusions from the OECD, although the OECD frames them in terms of adoption levels: in a low AI adoption scenario (23%), it projects a 0.24% increase in annual TFP growth over a decade, while in more favorable scenarios the increase would be around 0.61%.
For its part, the International Monetary Fund brings another important element into the AI equation: inequality, the trigger that turns AI into productivity for some and not for others. The IMF concludes that the impact of AI will be uneven depending on sectoral exposure, preparedness, and access to data and technology; in its analysis, AI tends to intensify inequality between countries and disproportionately benefits advanced economies. So let's not expect a simple, quick, universal leap: productivity does not come from installing a tool; it comes when a society, a sector, or a company reconfigures habits, processes, incentives, capabilities, and responsibilities. Put bluntly, the mass layoffs announced to date under the banner of being "the conclusive result of AI improving productivity in our organizations" are simply not that: they are, in reality, the natural effect of accumulated inefficiencies and decisions postponed too long.
Thus, digital inclusion is the great filter that decides who enters the future. For the 2.2 billion people without connectivity, the challenge is to have data, constant electricity, a device not shared among five people, and the time to use it. This explains why 24% of internet users in high-income countries have used ChatGPT, compared to 0.7% in low-income countries: material, educational, and language conditions filter adoption, not a lack of curiosity. Focusing on Mexico reveals another truth: while Mexico City and Sonora reported the highest percentages of households with internet in 2024 (84.4% in both), the lowest were in Guerrero (58.9%), Oaxaca (55.5%), and Chiapas (50.7%). With those figures on the table, it seems almost frivolous to make the central discussion whether AI will "replace" us; before that, a significant part of the population is not even in the digital conversation. The same happens in the labor market. If more than half of workers are informal, the daily working life of many households revolves around unstable incomes, in-person paperwork, intermittent schooling, unpaid care, and sporadic technological learning. It is not that AI does not affect them; it is that their urgent problem is not replacement but real access to capabilities and opportunities. Digital inclusion, then, is not an issue parallel to productivity; it is the precondition for opening up possibilities.
With the above in mind, let's think provocatively but practically. Artificial intelligence is not replacing people in general; it is replacing (or rather, compressing and reconfiguring) specific tasks within a relatively small segment of the global labor market, one that is formal, connected, and digitally literate. In doing so, it reveals what we did not want to see: the great divide is not "humans vs. machines" but "included vs. excluded." For a company, this changes the focus. The question is not "How many jobs will we eliminate?" but "What part of our work is repetitive, what part requires judgment, and what part involves human relationships?" AI often yields better results when it acts as scaffolding that accelerates learning and standardizes best practices (as mentioned earlier, this is why it benefits novice profiles more in certain environments), but it can be a hindrance when used as a substitute for judgment, responsibility, or contextual knowledge.
In human terms, if we care about digital inclusion, the most viable (and most honest) path looks more like a social infrastructure agenda than a technological trend: basic digital literacy for adults and young people, practical training within companies of all sizes, tools designed in plain language and in the local language, adoption built into processes, and an explicit participation goal. If this is not done, AI may close gaps among the connected while widening the distance from those who remain outside. And so we return to the mirror of 1975. Productivity does not fail because technology is useless; it fails because we sell huge promises and omit the slow work: changing habits, closing gaps, building capacities, sharing responsibilities. The "Paperless Office" did not arrive as a slogan; it arrived (with certain nuances) as the result of sustained discipline. Productivity with AI will arrive the same way: less as magic or prophecy, more as a collective project that decides who enters, who learns, and who leads.