Table of Contents
1. "This may be the last one I write with no AI help"
2. When Will the Age of Superintelligence Arrive? - Key Passages from the Full Text
3. Full English Text
"This may be the last one I write with no AI help"
On June 10, Sam Altman published a post on his blog laying out his outlook for the future of AI, and it has been drawing a lot of attention. It takes stock of the past and present, from 2020 onward, and offers a projection toward 2030. Since Altman tends to post on his blog whenever something significant is happening at OpenAI, many people read this one, coming after announcements like Google's Gemini and Anthropic's Claude, as an attempt to lay out OpenAI's direction. Altman opens with the pointed remark that this may be the last post he writes without any help from AI. At first I assumed he was simply alluding to how much ChatGPT has improved, but as I read on, I started to wonder whether GPT had already written it. I haven't read enough of Altman's writing to compare this with his earlier posts, but the phrasing and word choice somehow felt like AI: plausible, yet without experience or emotion!
I realized it may be the last one like this i write with no AI help at all.
- Sam Altman
When Will the Age of Superintelligence Arrive? - Key Passages from the Full Text
2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.
A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools. Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change, and one many people will figure out how to benefit from.
In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes.
But in still-very-important-ways, the 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out.
In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else.
Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.
******
There are serious challenges to confront along with the huge upsides. We do need to solve the safety issues, technically and societally, but then it’s critically important to widely distribute access to superintelligence given the economic implications. The best path forward might be something like:
- Solve the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term (social media feeds are an example of misaligned AI; the algorithms that power those are incredible at getting you to keep scrolling and clearly understand your short-term preferences, but they do so by exploiting something in your brain that overrides your long-term preference).
- Then focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country. Society is resilient, creative, and adapts quickly. If we can harness the collective will and wisdom of people, then although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly and be able to use this technology to get maximum upside and minimal downside. Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.
******
OpenAI is a lot of things now, but before anything else, we are a superintelligence research company. We have a lot of work in front of us, but most of the path in front of us is now lit, and the dark areas are receding fast. We feel extraordinarily grateful to get to do what we do.
Intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.
May we scale smoothly, exponentially and uneventfully through superintelligence.
Full English Text
The Gentle Singularity
We are past the event horizon; the takeoff has started. Humanity is close to building digital superintelligence, and at least so far it’s much less weird than it seems like it should be.
Robots are not yet walking the streets, nor are most of us talking to AI all day. People still die of disease, we still can’t easily go to space, and there is a lot about the universe we don’t understand.
And yet, we have recently built systems that are smarter than people in many ways, and are able to significantly amplify the output of people using them. The least-likely part of the work is behind us; the scientific insights that got us to systems like GPT-4 and o3 were hard-won, but will take us very far.
AI will contribute to the world in many ways, but the gains to quality of life from AI driving faster scientific progress and increased productivity will be enormous; the future can be vastly better than the present. Scientific progress is the biggest driver of overall progress; it’s hugely exciting to think about how much more we could have.
In some big sense, ChatGPT is already more powerful than any human who has ever lived. Hundreds of millions of people rely on it every day and for increasingly important tasks; a small new capability can create a hugely positive impact; a small misalignment multiplied by hundreds of millions of people can cause a great deal of negative impact.
2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world.
A lot more people will be able to create software, and art. But the world wants a lot more of both, and experts will probably still be much better than novices, as long as they embrace the new tools. Generally speaking, the ability for one person to get much more done in 2030 than they could in 2020 will be a striking change, and one many people will figure out how to benefit from.
In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes.
But in still-very-important-ways, the 2030s are likely going to be wildly different from any time that has come before. We do not know how far beyond human-level intelligence we can go, but we are about to find out.
In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant. These two have been the fundamental limiters on human progress for a long time; with abundant intelligence and energy (and good governance), we can theoretically have anything else.
Already we live with incredible digital intelligence, and after some initial shock, most of us are pretty used to it. Very quickly we go from being amazed that AI can generate a beautifully-written paragraph to wondering when it can generate a beautifully-written novel; or from being amazed that it can make life-saving medical diagnoses to wondering when it can develop the cures; or from being amazed it can create a small computer program to wondering when it can create an entire new company. This is how the singularity goes: wonders become routine, and then table stakes.
We already hear from scientists that they are two or three times more productive than they were before AI. Advanced AI is interesting for many reasons, but perhaps nothing is quite as significant as the fact that we can use it to do faster AI research. We may be able to discover new computing substrates, better algorithms, and who knows what else. If we can do a decade’s worth of research in a year, or a month, then the rate of progress will obviously be quite different.
From here on, the tools we have already built will help us find further scientific insights and aid us in creating better AI systems. Of course this isn’t the same thing as an AI system completely autonomously updating its own code, but nevertheless this is a larval version of recursive self-improvement.
There are other self-reinforcing loops at play. The economic value creation has started a flywheel of compounding infrastructure buildout to run these increasingly-powerful AI systems. And robots that can build other robots (and in some sense, datacenters that can build other datacenters) aren’t that far off.
If we have to make the first million humanoid robots the old-fashioned way, but then they can operate the entire supply chain—digging and refining minerals, driving trucks, running factories, etc.—to build more robots, which can build more chip fabrication facilities, data centers, etc., then the rate of progress will obviously be quite different.
As datacenter production gets automated, the cost of intelligence should eventually converge to near the cost of electricity. (People are often curious about how much energy a ChatGPT query uses; the average query uses about 0.34 watt-hours, about what an oven would use in a little over one second, or a high-efficiency lightbulb would use in a couple of minutes. It also uses about 0.000085 gallons of water; roughly one fifteenth of a teaspoon.)
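The oven and lightbulb comparisons in the parenthetical above are easy to sanity-check with back-of-the-envelope arithmetic. A minimal sketch in Python, assuming a typical 1 kW oven element and a 10 W high-efficiency LED bulb (those wattages are my assumptions, not figures from the essay):

```python
# Sanity-check the quoted per-query energy and water figures.
QUERY_WH = 0.34          # watt-hours per average ChatGPT query (quoted in the essay)
OVEN_W = 1000            # assumed oven power draw, watts
BULB_W = 10              # assumed high-efficiency LED bulb, watts

# time for each appliance to consume the same energy as one query
oven_seconds = QUERY_WH / OVEN_W * 3600
bulb_minutes = QUERY_WH / BULB_W * 60

# water: 0.000085 gallons per query, compared against a teaspoon
QUERY_GALLONS = 0.000085
ML_PER_GALLON = 3785.41
TSP_ML = 4.92892
water_tsp_fraction = QUERY_GALLONS * ML_PER_GALLON / TSP_ML

print(f"oven: {oven_seconds:.2f} s")                    # ~1.2 s: "a little over one second"
print(f"bulb: {bulb_minutes:.1f} min")                  # ~2 min: "a couple of minutes"
print(f"water: 1/{1 / water_tsp_fraction:.0f} teaspoon")  # ~1/15 of a teaspoon
```

The numbers line up with the essay's comparisons, so the quoted figures are at least internally consistent under these typical wattage assumptions.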
The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything. There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before. We probably won’t adopt a new social contract all at once, but when we look back in a few decades, the gradual changes will have amounted to something big.
If history is any guide, we will figure out new things to do and new things to want, and assimilate new tools quickly (job change after the industrial revolution is a good recent example). Expectations will go up, but capabilities will go up equally quickly, and we’ll all get better stuff. We will build ever-more-wonderful things for each other. People have a long-term important and curious advantage over AI: we are hard-wired to care about other people and what they think and do, and we don’t care very much about machines.
A subsistence farmer from a thousand years ago would look at what many of us do and say we have fake jobs, and think that we are just playing games to entertain ourselves since we have plenty of food and unimaginable luxuries. I hope we will look at the jobs a thousand years in the future and think they are very fake jobs, and I have no doubt they will feel incredibly important and satisfying to the people doing them.
The rate of new wonders being achieved will be immense. It’s hard to even imagine today what we will have discovered by 2035; maybe we will go from solving high-energy physics one year to beginning space colonization the next year; or from a major materials science breakthrough one year to true high-bandwidth brain-computer interfaces the next year. Many people will choose to live their lives in much the same way, but at least some people will probably decide to “plug in”.
Looking forward, this sounds hard to wrap our heads around. But probably living through it will feel impressive but manageable. From a relativistic perspective, the singularity happens bit by bit, and the merge happens slowly. We are climbing the long arc of exponential technological progress; it always looks vertical looking forward and flat going backwards, but it’s one smooth curve. (Think back to 2020, and what it would have sounded like to have something close to AGI by 2025, versus what the last 5 years have actually been like.)
There are serious challenges to confront along with the huge upsides. We do need to solve the safety issues, technically and societally, but then it’s critically important to widely distribute access to superintelligence given the economic implications. The best path forward might be something like:
- Solve the alignment problem, meaning that we can robustly guarantee that we get AI systems to learn and act towards what we collectively really want over the long-term (social media feeds are an example of misaligned AI; the algorithms that power those are incredible at getting you to keep scrolling and clearly understand your short-term preferences, but they do so by exploiting something in your brain that overrides your long-term preference).
- Then focus on making superintelligence cheap, widely available, and not too concentrated with any person, company, or country. Society is resilient, creative, and adapts quickly. If we can harness the collective will and wisdom of people, then although we’ll make plenty of mistakes and some things will go really wrong, we will learn and adapt quickly and be able to use this technology to get maximum upside and minimal downside. Giving users a lot of freedom, within broad bounds society has to decide on, seems very important. The sooner the world can start a conversation about what these broad bounds are and how we define collective alignment, the better.
We (the whole industry, not just OpenAI) are building a brain for the world. It will be extremely personalized and easy for everyone to use; we will be limited by good ideas. For a long time, technical people in the startup industry have made fun of “the idea guys”; people who had an idea and were looking for a team to build it. It now looks to me like they are about to have their day in the sun.
OpenAI is a lot of things now, but before anything else, we are a superintelligence research company. We have a lot of work in front of us, but most of the path in front of us is now lit, and the dark areas are receding fast. We feel extraordinarily grateful to get to do what we do.
Intelligence too cheap to meter is well within grasp. This may sound crazy to say, but if we told you back in 2020 we were going to be where we are today, it probably sounded more crazy than our current predictions about 2030.
May we scale smoothly, exponentially and uneventfully through superintelligence.

Source: blog.samaltman.com