ChatGPT and similar tools have become our co-pilots in a wide variety of tasks: writing emails, proofreading, translating, conducting market research, brainstorming, manipulating documents and data, developing software—you name it. They transform the way we work and, inevitably, will transform the way we teach and learn, if they haven't already.
Schools have adopted different strategies regarding generative AI, ranging from embracing it across subjects to banning it outright. So, what's the right way forward? From my perspective, ignoring or banning such a powerful tool, and the change it brings to human-computer interaction, would cost us too much. Instead of fighting it, we should learn to integrate it harmoniously into study processes and school policies, nurturing its productive aspects and mitigating the dangers of its abuse by students.
It's no secret that most simple computer science assignments can be solved easily by ChatGPT, Code Llama, or a similar alternative. So, if students want to cheat, they can simply ask the AI to solve the assignments for them. This makes cheating trivial to execute, and of course, students will not learn much if all they do is copy and paste.
What if there were a tool that constrains AI into an explanatory role: one that knows the task you are solving and points you in the right direction, but doesn't solve the entire exercise for you? There is one, and it's called "smart hints" on CodeEasy! We experimented for hours until we achieved the desired behavior, so that teachers don't need to. It works very simply: you write code, hit a problem that blocks you, click the "hint" button, and the AI gives you recommendations about your code.
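To make the idea concrete, here is a minimal, hypothetical sketch of how a hint-only constraint on an AI model might be expressed. This is not CodeEasy's actual implementation; the function name, prompt wording, and structure are all illustrative assumptions. The core idea is that the prompt carries the task description and the student's current code, but explicitly forbids the model from producing a full solution.

```python
# Hypothetical sketch: building a "hints only" prompt for a chat-based AI model.
# Not CodeEasy's real implementation; names and wording are illustrative.

def build_hint_prompt(task_description: str, student_code: str) -> str:
    """Compose a prompt that asks the model to point out the problem and
    suggest a direction, without writing the corrected code itself."""
    return (
        "You are a tutoring assistant for a programming course.\n"
        "The student is working on this task:\n"
        f"{task_description}\n\n"
        "Here is the student's current code:\n"
        f"{student_code}\n\n"
        "Give at most two short hints that point toward the bug or the "
        "next step. Do NOT write corrected code or reveal the full solution."
    )

# Example: a student's buggy attempt at summing a list.
prompt = build_hint_prompt(
    task_description="Write a function that returns the sum of a list.",
    student_code="def total(xs):\n    s = 0\n    for x in xs:\n        s = x\n    return s",
)
print(prompt)
```

The resulting string would then be sent to whatever model powers the hints; the constraint lives entirely in the instructions, which is why tuning the wording (as the team describes experimenting with) matters so much.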
"Smart hints" represent CodeEasy's initial foray into personalized learning. As a student, you seek answers to YOUR questions and hints for YOUR code, not generic responses. Just a year and a half ago, this notion might have seemed like science fiction. Yet here we are, with AI helping fix our code. It's exciting to ponder what the future holds!