AI Trends to Watch in 2024
The future came walking down the catwalk at the recent Paris Fashion Week, gracing the designer outfits of fashion brand Coperni in the form of an unassuming square. Called the AI Pin by its inventor Humane AI, the gadget was attached to the lapels of models presenting Coperni’s newest collection, giving the audience a sneak preview of a world in which artificial intelligence might power much of our communication.
Smart accessory: A Coperni model displays the AI Pin at Paris Fashion Week. The square-shaped device houses cutting-edge technology and comes to life with the help of artificial intelligence.
The AI Pin is intended to be an “intelligent clothing-based wearable device”, which uses “a range of sensors that enable contextual and ambient compute interactions”, according to Humane AI, a secretive startup founded by former Apple executives Imran Chaudhri and Bethany Bongiorno. In a TED talk, Chaudhri described his vision of “the disappearing computer” and demonstrated how AI could help translate phone calls in real time, for example, or beam text messages into the palm of your hand.
Backed by $100 million in venture funding, Humane AI aims to do nothing less than “redefine our relationship to technology” and “allow people to bring AI with them everywhere”, according to Chaudhri.
The Buzz Around AI
The AI Pin shows how far the technology has come. For decades, the concept of artificial intelligence was little more than a dream of scientists who aimed to build thinking machines. Each success in the field was followed by numerous setbacks – until suddenly, AI arrived in the mainstream.
When OpenAI released ChatGPT at the end of November 2022, the digital world witnessed the fastest product launch of the Internet age. Within two months, the chatbot counted more than 100 million active users, reaching this milestone five months faster than the previous record holder, TikTok.
“I think this is going to be the most useful, impactful, beneficial, amazing tool that humanity has yet created”, OpenAI CEO Sam Altman said at an event co-hosted by DLD and TU Munich. “I used to say the computer was that, and then we didn’t have anything that surpassed it for a while, and I’m pretty confident that this is going to surpass it.”
The overwhelming success has made 2023, arguably, the year of AI – particularly of generative AI: large language models (LLMs) that can write, and related models that paint or produce music and videos based on countless examples in their training data.
“We now have more than one billion end users that actively use generative AI products”, Ludwig Ensthaler of 468 Capital observed at the recent DLD AI Summit.
Many people now prefer to chat with their search engine rather than follow links. Others ask Stable Diffusion or DALL-E to turn text prompts into images, automatically generated in seconds. And most office workers will soon have an AI assistant, as Microsoft (OpenAI’s biggest investor) rolls out its Copilot feature across various products, including Windows and Teams.
Open excitement: Venture capital investors put almost $18 billion into companies related to generative AI in the first nine months of 2023 alone, according to Dealroom.co, a research firm.
Clearly we’re seeing the beginning of an exciting, immensely powerful technology – which brings up a number of important questions: Where does AI promise to bring the biggest business benefits? What should be improved? Where can it do harm? How should AI be regulated? And who’s in control?
AI Means Business
Efficiency gains and cost reduction are at the top of any manager’s wish list, and AI promises to deliver both. One example is administrative work that can be streamlined with the help of custom-made chatbots.
“It’s really the first technology that applies to anything that you want to do, creates productivity increases in pretty much anything that you want to do”, investor Hermann Hauser says. “And if you know a little bit about economics, productivity is the one magic parameter that makes us all better off without having to pay more.”
Some companies already see this effect pay off. “We have an application called AskHR that does a lot of internal human resources tasks for you”, Ana Paula Assis, General Manager of IBM Europe, told the audience at the DLD AI Summit. “In the past three years or so, we’ve managed to save more than 12,000 hours of work by automating functions and by making the access to information much easier for the employees.”
SAP already counts more than 24,000 customers using AI, according to Board Member Thomas Saueressig. He expects to see that “basically every employee has a trusted best buddy at their side”, in the form of an AI. “You can ask any business-related question, which is not defined before, and you get an answer to that.”
In media and marketing, the prospect of automating content creation could become a game changer. Take Creatopy, a startup led by former Google executive Dan Oros. Thanks to the new superpowers of AI, Oros promises, companies can build, and customize, advertising campaigns with a few clicks.
“Let’s say you want to create ads. You just click ‘generate ad’, it scrolls through your website, looks at your logo, your photos and everything is generated automatically”, Oros explained in conversation with Burda manager Stefan Atanassov. The AI can also help designers and writers adjust their work to different audiences, help translate and generally speed up their workflow – but it’s not meant to replace humans, Oros pointed out. (Watch the video for more.)
In journalism, however, a flood of AI-generated content could make it much harder for quality products to stand out and find an audience.
“If we have a great product and we have the trust of consumers, there will be a business – I’m not worried about that”, Martin Weiss, CEO of Hubert Burda Media (DLD’s parent company) says. “What I’m worried about is, will we get the eyeballs? Will we keep the trust and will we be able to create trust for new products in an age of AI?”
Which AI Can You Trust?
The issue of trust is key, in both business and society. Current systems tend to make up information because they merely compute the likelihood of words belonging together. Scientists speak of “hallucinations” when AI systems confidently present invented information as fact.
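To see why such hallucinations are baked in, consider a toy sketch – the candidate words and scores below are made up, not taken from any real model. The system merely turns scores for possible next words into probabilities and samples one; at no point does it check whether the result is true, so a plausible-sounding wrong answer can be sampled simply because it is statistically not impossible.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores (logits) for the word that follows
# "The Eiffel Tower is located in ..."
candidates = ["Paris", "Lyon", "Berlin", "1889"]
logits = np.array([4.0, 1.5, 1.0, 0.5])

# Softmax turns scores into probabilities; sampling picks one candidate.
probs = np.exp(logits) / np.exp(logits).sum()
next_word = rng.choice(candidates, p=probs)

print(dict(zip(candidates, probs.round(3))))  # likelihoods, not facts
print("sampled:", next_word)
```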
In many ways, generative AI systems resemble parrots that have learned to mimic humans without truly understanding the world they’re operating in, Jonas Andrulis, CEO and founder of Aleph Alpha, argues.
Humans, therefore, must remain in control, he says. “We have to take responsibility – and in order to do that, we have to understand where these results come from.”
With most AI systems resembling a black box, a new approach is needed that brings transparency and explainability, Andrulis maintains. Aleph Alpha’s solution is to highlight the statistical patterns that lead the AI to certain conclusions.
“We visualize these patterns”, Andrulis explains. “And once you’ve seen these patterns, you can check for yourself whether they’re accurate or not.”
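Aleph Alpha’s concrete method is its own. As a rough illustration of the general idea, here is a minimal gradient-based attribution sketch, using the small open gpt2 model from the Hugging Face transformers library as a stand-in: the gradient of the prediction with respect to each input token hints at which tokens drove the output – patterns a human can then inspect.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

inputs = tok("The capital of France is", return_tensors="pt")

# Embed the tokens manually so gradients can flow back to the input.
embeds = model.transformer.wte(inputs["input_ids"]).detach().requires_grad_(True)
logits = model(inputs_embeds=embeds).logits

# Backpropagate the score of the model's top next-token prediction.
logits[0, -1].max().backward()

# Gradient magnitude per input token ~ its influence on the prediction.
saliency = embeds.grad.norm(dim=-1).squeeze(0)
tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, saliency.tolist()):
    print(f"{token:>10}  {score:.3f}")
```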
A Sense of Reasoning
Another approach is to empower algorithms to go beyond big data. “Making the foundation of the calculation a reason chain rather than just statistics” would give AI a basic form of common sense, Hermann Hauser explains.
Unlikely AI – a company backed by Hauser’s Amadeus Capital – is working on this approach. The basic idea is to describe the human world to machines in such a way that the data “represents meaning rather than just a database node”, Hauser says.
This could not just reduce the error rate but allow for fact-checking in real time, he believes – turning the AI into an “evidence engine”, as Hauser calls it. “I have great hopes that these real-time fact-checkers will bring in the extremes of our polarized society. Because the reason these extremes exist is that at the extreme end, both on the left and on the right, rhetoric wins over any factual arguments.”
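What a “reason chain” might look like in code can only be guessed at from the outside; the toy below is a generic illustration, not Unlikely AI’s actual technology. The point is that every derived answer carries the explicit facts it was built from, so the conclusion can be checked like evidence rather than taken on statistical faith.

```python
# Toy forward chaining: conclusions are derived from explicit facts via an
# explicit rule, and each conclusion records its premises (its "evidence").
facts = {
    ("Socrates", "is_a", "human"),
    ("human", "is_a", "mortal"),
}

def derive(facts):
    """Chain transitive 'is_a' facts once, keeping the evidence trail."""
    conclusions = {}
    for (a, r1, b) in facts:
        for (c, r2, d) in facts:
            if r1 == r2 == "is_a" and b == c:
                conclusions[(a, "is_a", d)] = [(a, r1, b), (c, r2, d)]
    return conclusions

for conclusion, evidence in derive(facts).items():
    print(conclusion, "because", evidence)
```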
Who’s in Control?
The race is on, and a stupendous prize awaits. From 2023 to 2030, the global artificial intelligence market is expected to grow from around $150 billion today to more than $1.3 trillion per year – an annual growth rate of almost 37 percent.
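That growth rate follows directly from the two endpoints – a quick back-of-envelope check:

```python
# Compound annual growth rate: $150B (2023) to $1.3T (2030), seven years.
start, end, years = 150e9, 1.3e12, 7
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%}")  # ~36.1%, i.e. "almost 37 percent"
```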
No wonder tech giants are scrambling to secure a big share of the pie for themselves. But this race for market share is about more than money. If AI becomes as critical to our common future as most experts expect, whoever controls the algorithms will also have great influence on business outcomes and society.
“Agency and accountability” should become basic principles of the age of AI, Mark Surman, President of the Mozilla Foundation, argues. “Agency meaning at an individual level that I can shape what the AI does. I can understand it”, he told interviewer Andrian Kreye at the DLD AI Summit. Accountability means “you can see what happened” and identify those responsible if something should go wrong.
“Open source at its core principles is actually really good at both of those things”, Surman said, making the community a counterweight to big tech. Solving the black box problem to create trustworthy AI was a good example, he said. “Do we want to only solve it by a few companies behind closed doors? Or do we want to solve it collectively, where as we learn how to see, we can all see?”
Smart Regulation
Policymakers find themselves in a different, but equally important race against time. There’s a general consensus that a technology as powerful as AI needs guardrails – a set of ground rules that protects society from harm.
Given the long list of risks, including large-scale misinformation, new lethal weapons systems and machines outsmarting humans, even AI proponents are asking for such guardrails.
“I think regulation is really good for a technology like this”, OpenAI’s Sam Altman said when he visited Munich. “I generally think it is better to wait and see what’s going to happen and then regulate responsively. But there are times – and I think this is one of them – where you do want to be proactive.”
That’s what the European Union is trying to do with its proposed EU AI Act, which would ban the use of artificial intelligence for biometric surveillance, emotion recognition or predictive policing, for example. The current draft also requires that “generative AI systems like ChatGPT must disclose that content was AI-generated”.
Eva Maydell, a member of the European Parliament involved in drafting the law, cautions that the EU needs to find a balance between regulating risk and preserving the freedom to innovate.
“The EU is lagging behind when it comes to innovation”, Maydell noted in her AI Summit keynote. “We do have the brilliant researchers, the creativity, the talent, but we are often not the place where companies decide to scale up or to bring their products to a broader market.”
One way to change that would be to establish “regulatory sandboxes”, she suggested, “where companies will have more freedom to experiment with some temporary flexibility on the AI Act rules”. In addition, Maydell emphasized, Europe should work closely with its democratic allies to establish common rules, “based on our common understanding of the world, on our morals, on our ideas about freedom, about human dignity, about democracy.”
The Road Ahead
Finding the right approach to legislating AI is part of an even bigger challenge: future-proofing society as a whole – and getting people to embrace the disruptive technology, rather than rejecting it.
“We should be open to change and not – which seems to be often a standard reaction – raise doubts and concerns”, renowned economist Moritz Schularick argues. “There is no future, and no way to solve the problems ahead of us, by rejecting technology.”
At the same time, he points out, people’s fears must be taken seriously, and it’s imperative that the economic gains AI promises will benefit a majority of people, not just a few.
“We know that we have to manage the process of technology adoption very well”, Schularick says. “It can disrupt societies, it can lead to large inequalities. It can lead to resistance – I think we’ve seen some of this with the anti-globalization movement in recent years.”
Researchers, meanwhile, are trying to make AI systems even more powerful and, at the same time, more energy efficient. “What we’ve seen so far is making tasks ever more complex”, computer scientist Björn Ommer (LMU) explains – mostly by making large language models ever larger, scaling them up to more and more parameters.
This makes current AI systems enormously energy hungry. A new study concludes that 1.5 million servers powered by Nvidia chips would consume at least 85 terawatt-hours of electricity per year – “more than what many small countries use in a year”, the magazine Scientific American reports.
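The headline number is easy to sanity-check. A back-of-envelope calculation – assuming roughly 6.5 kilowatts per server, about the rated draw of an Nvidia DGX-class machine, which is an assumption here rather than a figure taken from the study – lands in the same range:

```python
# Rough sanity check of the ~85 TWh/year figure.
servers = 1.5e6
kw_per_server = 6.5          # assumed average power draw per server, in kW
hours_per_year = 24 * 365

twh = servers * kw_per_server * hours_per_year / 1e9  # kWh -> TWh
print(f"{twh:.0f} TWh per year")  # ~85 TWh
```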
In addition, bigger systems require so much computing power that fewer and fewer companies – much less universities – will be able to afford them, Ommer worries.
“And that means we have fewer key players, less diversity, less creativity”, he says. “And we would potentially run into a monopoly of a technology that I see critical for society and for a future sort of development there, let alone other questions such as data privacy.”
The solution, to him, is clear: AI systems must become more powerful while using fewer resources – utilizing smaller models, for example, “that you can run on your hardware which you have available”. Ommer’s own team has managed to make its popular image generator Stable Diffusion efficient enough to run on Android phones.
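Exactly how the team slimmed the model down is not detailed here, but the general recipe is familiar. A minimal sketch with the open-source diffusers library – model name and options illustrative, not the team’s actual pipeline – shows two common memory savers, half precision and attention slicing:

```python
# Minimal Stable Diffusion run with common memory-saving options.
# Requires: pip install diffusers transformers accelerate torch
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # illustrative model checkpoint
    torch_dtype=torch.float16,         # half precision halves memory use
)
pipe.enable_attention_slicing()        # trade some speed for a smaller memory peak
pipe = pipe.to("cuda")                 # assumes a CUDA-capable GPU

image = pipe("a watercolor of the Alps at dawn", num_inference_steps=25).images[0]
image.save("alps.png")
```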
“I’m excited to see generative AI change the way that we interact with computers”, Ommer says. Even as a computer scientist, he finds it frustrating how many clicks and mouse miles machines require to do what we want them to do.
“Having a natural language interface will enable so much more productivity, reduce frustration and make us, as users of this machine, so much more powerful”, Ommer believes. “It’s hard to overestimate the enormous potential that this will have, and to see all the applications that this will actually create.”