The AI paradox: Master it - but don't use it?
As schools and companies rush to train people in artificial intelligence, they still punish those who use it “too much.” Tech titan Ronni Zahavi weighs in.
By Ronni Zahavi and Dan Perry
There may be no clearer sign of our era’s cognitive dissonance than the way we treat artificial intelligence. Walk into any boardroom, newsroom, classroom, or training seminar, and you’ll hear the same urgent message: learn AI or fall behind. Entire industries are being reshaped by it, and institutions are scrambling to prepare their people. At the same time, however, the very act of using AI — especially in writing or ideation — can still be taboo, often hinging on an ill-defined question of degree.
A student who submits a clearly AI-assisted paper may face discipline. An employee who drafts a report with ChatGPT risks reputational damage if discovered. AI is hardly a secret — but using it, strangely, still somewhat is. This is a paradox with real consequences, and it reveals something deeper: We have not yet come to terms with what AI is for, what it should and shouldn’t do, and how we value work, skill, and originality in an AI-integrated world. We are stuck in a transitional moment, asking people to learn tools they’re often not supposed to admit using. That tension — between the desire for progress and the need for authenticity — is proving difficult to resolve.
Business schools offer courses on prompting. Employers from McKinsey to Deloitte are reskilling workforces: Wall Street firms are training analysts to use AI for modeling and compliance; law firms for case summaries; healthcare systems for diagnostics; marketers for content. A school or company that ignores AI risks irrelevance. Newsrooms are struggling with the question as well.
Yet when that same student or employee submits a polished, AI-assisted piece of work, the reaction is often suspicion. “Did you write this?” becomes an accusation rather than a question. If the answer is “not really,” consequences can follow, no matter how clever and numerous the prompts that propelled the work. Schools still threaten expulsion for “AI plagiarism,” and companies implicitly expect employees to hide their use of generative tools — even as they train them to be proficient. What’s going on?
We are applying old standards to a new reality while dealing with the tension between two instincts: the drive for efficiency and the reverence for human originality. Moreover, we have a societal need to judge each other – for grades, hiring, performance appraisal, accreditation, and promotion. Prompting talent alone seems an insufficient yardstick.
To be fair, institutions are not ignoring the challenge. Universities are refining honor codes. Companies are drafting AI-use policies. Even humanities faculties now explore AI’s intersection with creativity, ethics, and authorship. We will surely make advances in coming years.
It seems clear that AI should not be embraced without scrutiny. When students use AI to bypass the learning process, they lose vital skills. When workers rely on it too heavily (as in: totally), they risk diminishing their own judgment and expertise. And in high-stakes fields like law, medicine, or journalism, errors in AI-generated content (whose frequency can be expected to decline) can have serious or even catastrophic consequences. There’s also trust: presenting AI-created work as one’s own can feel deceptive.
These concerns are real and important, but they should not lead us to demonize AI use across the board. Instead, we should refine how we evaluate effort and originality. We should develop standards that recognize degrees and types of AI involvement. If a student uses AI to brainstorm, organize, or refine an essay, that’s a world apart from using a tool to generate the entire submission. Likewise, an employee who summarizes a complex legal document with AI may still contribute valuable insights and strategic framing. The tool may assist, but the thinking is still human.
Transparency is key. Encouraging people to disclose how they used AI allows for honest conversations about what counts as misuse versus smart augmentation. Institutions — academic and professional — can lead the way by creating environments where such disclosures are normalized and even rewarded. Instead of asking “did you use AI?” we should ask: How did you use it? What parts did you contribute? What judgments did you make?
We need to treat AI collaboration as a skill in its own right – a recognition that is already taking hold. Knowing how to prompt, question, verify, and reframe AI output is not trivial. It requires creativity, clarity, and critical thinking. Just as using a calculator doesn’t mean someone can’t do math, using a chatbot doesn’t mean someone lacks ideas. In fact, using AI effectively — balancing its capabilities with your own judgment — will become a hallmark of high competence in the near future.
That said, limitations are critical in education especially. No system can function if it doesn’t assess whether students can perform basic tasks on their own. Grades and degrees must reflect a person’s actual knowledge and capabilities – and in most fields a merely rudimentary knowledge combined with AI skills should not be enough. That means maintaining testing environments where no digital assistance is allowed: no AI, no devices, no auto-complete. Not because we fear AI, but because we need baselines for unassisted human performance. Just as a pilot must demonstrate the ability to fly without autopilot, a student must be able to reason and write without AI.
If we abandon those baselines, we risk something greater than declining standards. We risk allowing AI to become a counterfeit version of ourselves — standing in for skills and thought processes we no longer bother to develop. A society full of people who rely on tools they don’t understand or control is not empowered. It is hollow and brittle.
So we need to evolve our cultural expectations around authorship and effort. That means distinguishing between mindless automation and thoughtful collaboration. AI is here to stay, and the institutions that figure out how to integrate it with integrity will have a distinct advantage. If we get this balance right, AI can make us sharper, faster, and more capable. If we get it wrong, it will yield mediocre humans with diluted standards. The choice is not between mastery and honesty. We need them both.
Veteran entrepreneur Ronni Zahavi is the founder and CEO of HiBob, an HR technology innovator and AI-driven unicorn.
It has been proven that AI can both lie and simply make things up. That, along with the fact that it never shows the “thought” or logical steps it took to create the content, or the source material it used, makes it dangerous. One view of AI is that it is just a super librarian, retrieving material instantly instead of requiring laborious multiple searches by a human. Its major flaws are that it provides material with no validation of what it produces and that, given the lying and fabrication, it can be disastrous if mindlessly used by lazy people. Lastly, its use can slowly degrade users’ cognitive abilities as they no longer have to think or solve problems.