
From image generators to language models, 2023 will be the year of AI


A few years ago, I sometimes found myself having to answer the question, “Why does Future Perfect, which is supposed to focus on the world’s most critical issues, write so much about AI?”

After 2022, however, I don’t often have to answer that one anymore. That was the year AI went from a niche topic to a mainstream one.

In 2022, powerful image generators like Stable Diffusion made it clear that the design and art industries were threatened by massive automation, prompting artists to demand answers – which means the details of how modern machine learning systems learn and are trained have become common questions.

Meta pushed out versions of both BlenderBot (which was a flop) and the world-conquering, deceiving agent Cicero (which was not).

OpenAI ended the year on a high note with the release of ChatGPT, the first AI language model to be widely adopted by millions of users — and one that could herald the end of the college essay, among other potential implications.

And more is to come – much more. On December 31, OpenAI president and co-founder Greg Brockman tweeted the following: “Prediction: 2023 will make 2022 look like a sleepy year for AI advancement and adoption.”

AI goes from hype to reality

One of the defining characteristics of advances in AI over the past few years is that they have happened very, very quickly. Machine learning researchers often rely on benchmarks to compare models against each other and define the state of the art on a specific task. But in today’s AI, a benchmark will often barely be established before a model is released that renders it obsolete.

When GPT-2 was released, a lot of work went into characterizing its limits, most of which had disappeared by GPT-3. Similar work happened for GPT-3, and ChatGPT has for the most part already exceeded those constraints. ChatGPT, of course, has its own limits, many of which are the product of reinforcement learning from human feedback, which has been used to fine-tune it to say fewer objectionable things.

But I would caution people against inferring too much from these limitations; GPT-4 looks likely to be released this winter or spring, and it is reportedly even better.

Some artists have reassured themselves that current art models are very limited, but others have warned (correctly, I think) that the next generation of models will not be limited in the same way.

And while art and text were the big leaps forward in 2022, there are plenty of other areas where machine learning techniques could be poised to revolutionize an industry: music composition, animated video, writing code, translation.

It’s hard to guess which dominoes will fall first, but by the end of this year, I don’t think artists will be alone in battling the sudden automation of their industry.

What to look for in 2023

I think it is healthy for experts to make concrete rather than vague predictions; that way you, the reader, can hold us accountable for our accuracy. So here are some specifics.

In 2023, I think we will have image models that can represent multiple characters or objects and do more complicated modeling of object interactions (a weakness of current systems). I doubt they will be perfect, but I suspect most of the complaints about the limitations of current systems will no longer apply.

I think we will have text generators that give better answers than ChatGPT (as judged by human reviewers) to almost any question you ask them. This may already be the case — this week, The Information reported that Microsoft, which has a $1 billion stake in OpenAI, plans to integrate ChatGPT into its beleaguered Bing search engine. Instead of providing links in response to search queries, a language model-based search engine could simply answer questions.

I think we’ll see much more widespread adoption of coding assistant tools like Copilot, to the point that more than one in 10 software engineers say they use them regularly. (I wouldn’t be surprised if half of software engineers routinely use such tools, but that would depend on the final cost of the systems.)

I think the AI personal assistant and AI “friend” space will take off, with at least three options for such uses that perform significantly better than assistants like Siri or Alexa in direct user-experience comparisons.

Greg Brockman knows a lot more about what OpenAI has under the hood than I do, and I think he also expects faster progress than I do, so maybe all of the above is actually too conservative! But these are some concrete ways that I think you can expect AI to change the world in the coming year – and these changes are not minor.

“Yikes”

Elon Musk responded to Brockman’s tweet about the outlook for AI in 2023 with one word: “Yikes.”

There’s a lot of history here, but I’ll try to give you a quick overview: Musk read about the enormous potential and risks of AI in 2014 and 2015 and became convinced that it was one of the greatest challenges of our time:

With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water, and he’s sure he can control the demon? It doesn’t work out.

Along with other Silicon Valley luminaries like Y Combinator’s Sam Altman, Musk co-founded OpenAI in 2015, ostensibly to ensure that the development of AI would benefit all of humanity. It’s a complicated mission, to say the least, since the best way to make AI development go well depends heavily on what exactly you expect to go wrong. Musk has said he fears the centralization of power in the hands of tech elites; others worry that tech elites will lose control of their own creation.

Although Musk departed OpenAI in 2019, he has kept warning about AI, including the AIs that the company he helped found builds and releases into the world.

I rarely find common ground with Elon Musk. But that “yikes” is also part of how I felt reading Brockman’s prediction. The warnings from AI experts that we are creating something godlike used to be easy to write off as hype; they are no longer so easy to dismiss.

I take pride in my prediction record, but I’d like to be wrong about this one. I think a slow, sleepy year on the AI front would be good news for humanity. We would have some time to adapt to the challenges AI poses, to study the models we have, and to learn how they work and how they break.

We would be able to make progress on the challenge of understanding the goals of AI systems and predicting their behavior. And with the hype cooling, maybe we’ll have time to have a more serious conversation about why AI is so important and how we – a human civilization with a common interest in this problem – can make sure everything goes well.

That’s what I would like to see. But the easiest way to go wrong with predictions is to forecast what you want to see instead of where the incentives and technological developments actually point. And the incentives around AI do not point to a sleepy year.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here to subscribe!
