We speak with Professor Marek Kowalkiewicz, Head of the Digital Economy Chair at Queensland University of Technology and author of the bestselling book “The Economy of Algorithms: AI and the Rise of the Digital Minions.” Professor Kowalkiewicz will be a guest at the Masters&Robots 2024 conference, where he will deliver a lecture titled “Reshaping Work and Leadership in the Economy of Algorithms”.
Digital University: What exactly is the economy of algorithms that, as you write in your publications, is changing and shaping our lives?
Marek Kowalkiewicz: I refer to the economy of algorithms as a new economic paradigm in which algorithms and artificial intelligence systems become key participants in economic processes. They are taking over more and more tasks traditionally performed by humans, such as decision-making, data analysis, and customer interactions. This is already happening, for instance, on e-commerce platforms where bots sell products, although not everyone realizes it. I recall a situation where two bots on Amazon kept raising the price of a book to absurd levels because no one had anticipated such an interaction between them.
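[Editor's note: the runaway pricing loop described above can be sketched as a toy simulation. Two hypothetical repricing bots each set their price as a fixed multiple of the competitor's; whenever the product of the two multipliers exceeds 1, prices escalate without bound. The multipliers and starting prices below are illustrative assumptions, not the actual rules the Amazon bots used.]

```python
# Toy model of two algorithmic repricers locked in a feedback loop.
# Multipliers are illustrative: bot A prices above its rival,
# bot B prices just below its rival.

def simulate(price_a, price_b, mult_a, mult_b, rounds):
    """Each round, bot A reprices against B, then B reprices against A."""
    history = [(price_a, price_b)]
    for _ in range(rounds):
        price_a = price_b * mult_a  # A sets its price relative to B's
        price_b = price_a * mult_b  # B then reprices relative to A's new price
        history.append((price_a, price_b))
    return history

history = simulate(price_a=20.0, price_b=20.0,
                   mult_a=1.27, mult_b=0.998, rounds=10)
for i, (a, b) in enumerate(history):
    print(f"round {i}: A=${a:,.2f}  B=${b:,.2f}")
```

Because 1.27 × 0.998 > 1, both prices grow geometrically each round; nothing in either bot's local rule is wrong, yet their interaction produces absurd prices, which is exactly why no one anticipated the outcome.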
Is this an example of algorithms that ‘escape’ our understanding?
Algorithms have ‘escaped’ our understanding mainly due to the development of advanced machine-learning systems capable of self-learning and evolution. Modern algorithms often create other algorithms, making decision-making so complex that even their creators cannot always fully explain why specific decisions were made. This raises challenges related to transparency and responsibility for AI systems’ actions.
Do we need to keep an eye on artificial intelligence?
When we talk about artificial intelligence or advanced algorithms, we often think they are infallible. However, reality is more complex and sometimes amusing. For example, robotic vacuum cleaners meant to clean when we’re not home often encounter problems: they suck up charger cables, get stuck under furniture, or scatter debris.
It’s the same with many advanced technologies: they perform well under ideal conditions, but our world is complicated. Whether it’s a vacuum cleaner, a driverless car, or another system, they work excellently about 80% of the time, but disaster can strike in the remaining 20%. As humans, we have an essential role in the future—we may not vacuum the house or drive the car ourselves, but we’ll need to be ready to help algorithms and robots when they encounter difficulties. This problem will not be solved in the next two or five years. There will always be something that requires human intervention. In my book “The Economy of Algorithms: AI and the Rise of the Digital Minions,” I write that we become like Gru from the Despicable Me movies—making sure our digital Minions don’t create chaos.
But how should we set boundaries for this technology? When should we hand over control of our decisions to algorithms, and when should we maintain full supervision over them?
There’s no definitive answer because much depends on individual preferences. Some will gladly delegate everyday tasks like shopping to automated systems. Others, more traditional or conscious consumers, will prefer to make these decisions themselves. How far we go also depends on the type of decision—from simple ones like subscribing to everyday products to more complex ones like choosing a school for our children. Hopefully, we’ll retain the ability to select and won’t be forced to rely entirely on algorithms. It’s vital that we have control over how technology influences our lives.
What potential threats arise from excessive reliance on algorithms in daily life and work?
First and foremost, we risk losing privacy, as our behaviors can be constantly monitored. Algorithms can also manipulate our preferences and decisions, influencing consumer choices. Currently, people decide how they use technology, but we already see examples where our options are limited by corporate decisions. For instance, Amazon Go stores don’t accept cash payments, not due to technological limitations but because of company policy. Similar situations may occur in the future, such as with “smart fridges” that can order food only from specific sources. In Europe, such practices might be limited by regulations, but restrictions may be less stringent in other regions like the United States or Australia. This shows that while AI itself won’t take control, decisions made by tech companies can significantly impact our freedom of choice.
Another issue is the potential for algorithms to reinforce social inequalities, leading to discrimination on various levels. Automation can also contribute to job losses. Despite these threats, it’s important to remember that algorithms can bring many positive effects.
What are the most significant benefits?
Digital technologies and artificial intelligence are among the best things that could have happened to humanity. Thanks to them, we can detect cancer earlier, speed up medical processes, and use less energy in logistics. We can also produce fewer materials while achieving better results. For example, today’s smartphones, which weigh about 100 grams, replace devices that 20 years ago would have weighed half a ton!
Understanding the potential threats and benefits of applying these technologies is crucial. Business leaders must approach these challenges consciously, analyzing how technology will impact their organizations, company culture, and customer relationships. They should particularly pay attention to the ethical aspects of using algorithms, such as trust issues with generative artificial intelligence and the impact on employment and the workplace.
We also observe a growing number of “algorithmic customers.” Companies must learn to navigate relationships with algorithms that increasingly make decisions on behalf of people. How do you convince a “smart fridge” to buy your product? Should humans and bots see the same products when shopping online, or should some products be exclusively for humans? Therefore, leaders’ roles involve not only adapting to these changes but also actively shaping the future to maximize benefits and minimize risks associated with digital technologies.
What do we need most in this context—as leaders and as ordinary technology users? Is digital education from an early age key, or do we need strong regulations? Or are both elements equally important?
Regulations are important but, by nature, always come with delays, reacting to emerging problems rather than proactively predicting them. Governments usually respond to specific crises, such as children’s use of social media. Therefore, we shouldn’t place too much hope solely in regulations.
Developing digital literacy and AI literacy is much more significant. This should become a priority to enable people to understand how these technologies work and how they can affect our lives. We need widespread education that will reach billions worldwide, teaching them how to use new technologies safely and consciously.
This is especially crucial in the case of generative artificial intelligence, which operates differently from traditional computer systems. People need to understand that the results generated by these systems aren’t always reliable and require verification. It’s important to educate society about the possibility of AI “hallucinations,” meaning the generation of information that isn’t necessarily true, which can have serious consequences.
We should also critically approach interactions with technologies, pondering what happens to our data and why. When more people start asking such questions, it can influence the market and force companies to introduce more transparent and ethical solutions. Education and awareness are key elements that can help us fully benefit from technological advancements while minimizing risks. Artificial intelligence is technology—not an omnipotent entity capable of everything, including taking over the world.
Do you believe such dynamic technological development will lead us to choose interactions with machines over relationships with people, or will human relationships prevail over those with algorithms and artificial intelligence?
As a scientist, I have to say it’s complicated. For example, war veterans suffering from PTSD often prefer conversations with chatbots over humans. Chatbots don’t judge, don’t remember, yet allow individuals to express their emotions. In such situations, interactions with machines can be more beneficial for people who fear judgment.
Does this mean we’ll always choose machines? No, but there are cases where contact with AI is preferred. I also observe this among students, who increasingly avoid direct interactions, opting instead for digital communication. This may result from a lack of experience in establishing personal relationships or simply convenience.
People will strive to maintain close relationships with other people, at least within a limited circle of close friends. However, broader social circles may shrink, especially as technologies increasingly draw us away from direct human contact. It’s worth remembering that many technologies are designed to be addictive. Social media and platforms based on generative AI, such as chatbots, are created to hold our attention for as long as possible. It’s important to be aware of this and make conscious decisions that allow us to maintain balance in our lives. And that’s what I wish for all of us.