According to him, developers are degenerating. He adds: “My opinion is based on the fact that artificial intelligence outputs require validation – checking whether they are correct, whether the output corresponds to what we asked for, and whether the artificial intelligence understood our query.”
Are we becoming AI users?
A year ago, large companies developing their own AI models stopped training them on data created after 2021. Why? AI degeneration is to blame: in recent years, models have increasingly been learning from their own outputs, and those outputs cannot be relied upon. They cover the basics, but often lack the ability to think in a broader context.
According to Ondřej Synek, developers no longer think about how to solve a specific problem or write an algorithm. Many of them focus instead on how to craft a prompt for artificial intelligence so that the task costs them as little work as possible.
“I see this as a negative aspect of artificial intelligence that is (often deliberately) overlooked. I hear voices saying that a junior developer doesn’t need to know how to program – artificial intelligence will solve the problem for them. But in our company, such a junior developer has no chance of succeeding,” adds the head of development.
Developers are becoming users of artificial intelligence. They rely on it in their work without subjecting its outputs to critical analysis, and they often don’t even notice that the AI has made a mistake. If their code doesn’t work, models such as ChatGPT or Claude will fix it for them.
AI poses security risks
He sees the ever-weakening emphasis on the security of software and custom applications as a potential threat. When it comes to programming, AI models draw heavily on sources such as Stack Overflow and GitHub. No one today can determine whether software created this way contains security holes.
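To make the risk concrete, here is a minimal, hypothetical Python sketch (not from the interview; the function names and schema are illustrative) of the kind of hole that slips through when generated code is accepted without review: the first function interpolates user input directly into an SQL statement, a textbook injection vulnerability, while the second uses a parameterised query.

```python
# Hypothetical illustration of an SQL injection hole of the kind that
# can slip through when AI-generated code is accepted without review.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: passing "alice' OR '1'='1" returns every row,
    # because the input is spliced directly into the SQL text.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Safer: a parameterised query lets the driver escape the input.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()
```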
Another problem, according to Synek, is the absence of code validation. The code may work, but developers don’t know why. On top of that, they often fail to handle edge cases – situations where the user of a web application behaves differently than the developer intended.
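As a hypothetical sketch of that validation gap (the function and its logic are illustrative, not from the interview): the happy path works and the code looks finished, yet an input the developer never considered brings it down.

```python
# Hypothetical illustration of an unhandled edge case: the happy path
# works, so the code looks finished, but unexpected input crashes it.
def average_rating(ratings: list[float]) -> float:
    return sum(ratings) / len(ratings)  # ZeroDivisionError on an empty list

def average_rating_validated(ratings: list[float]) -> float:
    # Edge cases made explicit: no ratings yet, and out-of-range values.
    if not ratings:
        return 0.0
    if any(not 0 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be between 0 and 5")
    return sum(ratings) / len(ratings)
```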
Artificial intelligence affects all professions
During our conversation with Ondřej, we also touched on more general topics – always in the context of artificial intelligence, of course. “I encounter AI in virtually every industry. For many things I would otherwise overlook or consult an expert about, I simply open ChatGPT and ask the artificial intelligence myself,” he adds self-critically.
AI is a tool. And modernisation tied to digitalisation is a sexy topic – after all, technological progress has accompanied humanity since time immemorial. Artificial intelligence, however, only works as an extension of the hand: we cannot stop caring about the profession or craft itself.
The solution lies in corporate responsibility
Does Chief Technology Officer Ondřej Synek see an effective solution? He sees two levels. The first lies in individual responsibility: we should want to keep improving ourselves rather than relying on artificial intelligence to do it for us.
“It is also an appeal to companies – take an interest in how your people work. They may be more efficient now, but at the same time you are training a team that does not understand its own work,” says Ondřej Synek, appealing to corporate responsibility as well.
It can easily happen that a client returns an application for updates or improvements. Management assigns the work to the same colleague who created it – but after several months or years, that developer has no idea how the application works. In fact, they never really developed it at all.
The solution, however, does not lie in abandoning artificial intelligence. Let’s treat it as a mere tool that makes our work easier – and let’s also think about how responsibly we approach the development of our own craft.