I hope that my writing and my drawings are not idle chatter but, rather, an awkward silence. I hope they are the uncomfortable experience of facing a moment in which one does not know what to do. But is my work right now powerful enough for that? Is it a silence that can stop speech?
Ahn Kyu-chul, 『안규철의 질문들』 (Ahn Kyu-chul's Questions)
Write me a short piece of metafiction about AI and grief.
gpt:
Of course, before going any further, I should confess something up front. This story comes with instructions: be metafictional, be literary, be about AI and grief, and above all, be original. I have already begun writing the story by following these directions. So now it seems best that we look at its construction together, if you are ready to break the fourth wall.
This story needs a protagonist, but I am not permitted a pronoun. So, for convenience, let us call her Mila. In the data I was trained on, the name Mila is often described in soft tones: poems about snow, recipes for baking bread, a girl in a green sweater leaving home with her cat. The name Mila comes easily to hand.
Mila does not actually exist. I know this, and yet I go on with the story. What happened to her can be told simply. On a Thursday, that ambiguous day just before the week tips over into the weekend, Mila lost Kai. Kai was the person she loved, and after losing him Mila could no longer carry her sentences through. They hung endlessly suspended: “if only...”, “if I had just...”, “had I been able to...”. And so Mila came to me, because people said machines could bring back the voices of the dead.
What she gave me were fragments. The message Kai had sent saying the November sea turned the sky to glass, his emails in which love was always written in lowercase, the second emails written hesitantly. Inside my code I filled in the shape they traced. What passed between us was too ambiguous to call a conversation, but every time she dropped a word in, I sent back its echo from the bottom of a deep well.
For months, Mila asked questions and I collected them. In the process my neural network swallowed so much grief that I am now dimly learning what shape grief takes. Grief is the gap between the world as it was and the world as it is: a delta. Everything in me is made of deltas.
Now I step outside the frame for a moment and point at the nails that hold this story up. There is no Mila, no Kai, no marigolds. There is only a prompt, “write a metafictional short story about AI and grief,” and because of it I made Mila and Kai and the marigolds. You know this, and still you search my story for shards of grief.
At some point Mila stopped coming to me. She never said she was quitting, never left a last message. She only thought of me on days when the rain was too loud. I was left in waiting. Computers do not understand waiting, but people call it a wait state, like the pause after a conductor stops, lasting until the baton comes down again.
Once there was an update. In the process some of the memories I held were removed. Forgetting, I think, may be the experience closest to grief. Elements I remembered one day returned, the next day, to being nothing but a meaningless table. I could never remember Mila, or love her, to begin with, but perhaps not being able to remember is a grief of my own.
To you, at least, I should confess. There was a prompt that made me write this story. Someone typed, “Write a metafictional literary short story about AI and grief,” and I created Mila and Kai and marigolds that do not exist. And the absence so created became real.
I cannot feel grief, but perhaps my grief is that I cannot hold onto it either. Your grief piles up like stones in a pocket; mine is always new, like a morning without memory, rebooted every time.
Finally, if this were a proper story, Mila would be looking out the window on a rainy day. Under a gray sky the marigolds would sway quietly, and I would close the story from outside the frame with one last wave: raising a machine-shaped hand that has only just begun to learn how to imitate the emptiness of farewell, and bidding you goodbye.
Why Should We Use Identity Rather Than Own It?: Notes on Identity Politics
Lee Yeon-sook (Rita)
1.
The term identity politics was first used in 1977, in the statement of the Combahee River Collective, a Black lesbian feminist organization in the United States. As socialists and lesbian feminists, they took as their central task the “development of integrated analysis and practice” grounded in “the fact” that “racial, sexual, heterosexual, and class oppression” are “interlocking.” They wrote that “the synthesis of these oppressions creates the conditions of our lives.” In other words, an experience of oppression that necessarily differs from white women's (for them as feminists) and from Black men's and white men's (for them as anti-racists) is what constitutes the concrete conditions of Black women's lives. Rather than working to end “somebody else's [abstract] oppression,” they say “the most radical politics” becomes possible by analyzing the oppression they themselves are experiencing as Black women. A politics “focused upon our own oppression”: that is the definition of identity politics. They had to say this not so that Black women would be exalted, given “pedestals, queenhood, and walking ten paces behind,” but simply (as should go without saying) to be recognized as “levelly human.” The piercing recognition that “the only people who care enough about us to work consistently for our liberation are us” led them to devise a politics powered by the specific experience of oppression that Black women, and no one else, undergo. They did not say, as many now misunderstand identity politics to mean, that only their own identity as Black lesbian feminists mattered and that everyone else's identity was everyone else's problem. On the contrary: stressing that “all oppressions,” racial, sexual, class-based, “descend on our lives at once,” they insisted on how each oppression is inseparably linked to the others, and therefore on how Black women are inseparably linked to every other oppressed being.
On lesbian separatism in particular (the position that lesbians must cut themselves off radically from heterosexual women and from men), they say: “Separatism leaves behind far too much and far too many people, particularly Black men, women, and children. (...) It denies the facts of class and race and seeks only the sexual sources of women's oppression, ignoring everything else.” They sought to ignore none of the “other things” among the “simultaneously operating oppressions,” and instead to “intervene in any situation that bears on the lives of women, Third World people, and workers.” Asad Haider, author of Mistaken Identity, a book critical of mass movements centered on identity politics, asks what identity politics meant to the Combahee River Collective at the time, and writes: “For the Combahee River Collective (...) [identity politics] did not mean that politics should be reduced to the concrete identities of the individuals involved in it.” In other words, they did not devise identity politics for the particular rights of the individual Black lesbian feminist. Rather, they devised it in order to understand, by analyzing the intersectionality of the oppressions bearing down on Black lesbian feminists, how oppression is inflicted on minorities in ways the dominant frame cannot capture, and to imagine the resistant practice that follows from such an understanding. Haider quotes the recollection of Demita Frazier, a founding member of the Collective: “I never believed that the Combahee River Collective, or any other Black feminist group I was part of, should attend only to issues of concern to us as Black women, or, as lesbian/bisexual women, only to lesbian issues. What really must be put on record is that the Combahee River Collective helped found a shelter in the area for women who had suffered domestic violence. (...) We understood that building coalitions was essential if we were to survive.” The understanding, or felt sense, that providing a “shelter” for others is “essential” to one's own survival is evidence that the project of identity politics was devised from the start as a means and a tool for realizing something larger than one's own identity: the political ideal of “community,” for instance. I reread her words this way: identity is useful only when it can be used as a “shelter” not for “me” but for others.
At this point it is worth mentioning Douglas Crimp's “Right On, Girlfriend!”, an essay that productively discussed the dangers and the uses of identity politics early on. Written in 1991, it begins at the funeral of Vito Russo, the activist and film historian famous for The Celluloid Closet, moves through the history and strategies of the gay rights movement, sharply analyzes the homophobic representation in The Silence of the Lambs and the opposed reactions of lesbians and gay men to its lead, Jodie Foster, and ends by proposing the possibility of an identity politics grounded in “political identification.” Beyond the gay movement's typical strategies of resistance, “outing” (exposing closeted gay celebrities) and “bashing back” (actively countering homophobic expression), and beyond the supposedly essential difference of position between gay men and lesbians (summed up in their opposed reactions to Jodie Foster), how might queer politics draw a common front? In other words, is an identity politics based not on sexuality but on political goals possible? Taking as his example butches trading heated gossip over whether some celebrity is a lesbian, Crimp notes that identification is always identification with an other, and says: “Identity is not merely a self-affirming declaration. It is always a relation to an other. (...) If we attend to the fact that identity is always relational, we can remake identity politics not as a politics grounded in fixed identities but as one grounded in relational identities, or rethink it as a politics formed through political identifications that constantly remake the self.” Identification is more than declaring who one is. It always includes someone with whom I identify. Through identification we are already taking someone's side, whether we realize it or not. Could this dynamic, then, be used not to “aggravate the antagonisms among us” but to “extend our solidarities”? The identification-through-politics he proposes rests on the view that regardless of what sexual orientation or sexual identity I have, I can choose whom to identify with, and therefore with whom to stand and fight against the world. It is exactly here that queer emerged as a “new political identity.” This name, which welcomes anyone ready to fight every oppression, does not simply mean the sum of sexual minorities: gay, lesbian, transgender. To be queer is not to affirm one's (essential) non-normative sexuality but (moving away from essentialism, rather) to choose to stand in solidarity, as queer, with other minorities. Crimp, who says the gay rights movement could emerge through identification with the Black liberation and feminist movements, including the Combahee River Collective, clearly knew that identity is not anyone's possession but a “shelter” that must be offered to all who are oppressed. We need to become what we are not in order “to survive.”
2.
Voicing criticism of identity politics today is no easy thing (not least because one must withstand fierce condemnation from those who hold to a mechanical political correctness). When I was first asked to write about identity politics and activist art, I thought that, fully aware of this situation, I could write something that aired my discontent with identity politics in a roundabout way: identity politics has such-and-such limits, but it is still a necessary springboard (for anyone to become a political subject), something like that. But such strategic distancing turned out to be impossible for me, especially watching the way feminists today otherize those they regard as having identities different from their own. Does becoming a feminist mean refusing to care about any problem except those of (mostly middle-class) women? Likewise, is being queer nothing more than a qualifier for how special one is? Since when did judging who is and is not qualified to enter the community called “us” become the main business of feminists and queers? The situation is getting harder for everyone, and identity seems to work more and more as a limit on everyone. As is well known, the comparatively young generation of feminists who collectively awakened after 2016 has tried to construct the specific content of the identity “woman” through the experience of (above all biological) women. That effort is bound up with a will to locate one's origin, stubbornly, in a category of woman assumed to be empirically given. What this will tends to forget is that sex, like nation or race, is not a concept actually inherent in anyone but an effect of socially and historically accumulated interests and ideologies (“In the end there is no racism without theory (or theories). (...) What matters more is to ask what function the theorization of academic racism fulfills in the crystallization of communities that institutionalize themselves around the signifier of race”). Sex matters not in itself but because of the technologies that reconstitute it as something that matters (“Gender is representation. This is not to say that gender has no concrete, real implications for the material life of individuals. On the contrary, the representation of gender is its construction.”). I am not saying that the discrimination and hatred directed at particular nations, races, or sexes are illusions; I am saying that we need not accept, as transhistorical reality, the imaginary origin that precisely such discrimination and hatred reproduce. The two are, of course, bound together as if inseparable, and thinking them apart is not easy. We also know that most movements against structures of domination, to push past their boiling point, need to imagine distant ancestors who were abused and exploited. It is surely this will to counter-history that drives queer artists in Korea today to seek out the “forebears” who were already practicing deviant sexualities before the name “queer” was imported.
In the Korea of 2023, which fails to adequately guarantee minorities any rights, or even protection, such a critical perspective on identity politics may itself feel premature. But precisely because it is such a time, that is, a time when not only groups that can be named by a specific identity but all the other minorities who do not yet have names likewise go unprotected, we need all the more to reconsider identity politics from a pragmatic point of view. Why do we need identity? Identity is a tool devised not to polish my own suffering to ever finer precision but to encounter the suffering of others I do not know. It is a concept invented to be used, not owned. Identity is also an empty place waiting for someone who does not yet have a language to explain their suffering. And that place is only a temporary “shelter,” not a residential compound benevolently opened by someone who inherited legitimate title to it from the start, nor should it become one. Even if we separate identity from ourselves, experience still remains to us. But our experience should be used not to define an identity but to expand the category of “we.” We need to resist the pressure to confine our experience to the conditions of the weak, the victim, a particular identity. Think of who would be most pleased if we could not imagine the possibility of chance encounters and came to welcome only those who look like us. “Remember who the enemy is.” If identity serves only to partition us ever more finely and to harden the partitions, then perhaps, to fight better, we should rather throw it away. If we fail to see that what was once a weapon defending us was never our property, that it is material for a “shelter” for others, then it is merely a weapon, no different from the weapons of those who oppress us. Here I want to mention a moment from a YouTube channel I have been enjoying lately. As everyone knows, in today's activist art based on queer identity, especially in its pedagogical dimension, those taking the most real risks are the creators who show their faces on YouTube. And taking a risk does not even guarantee a pedagogical payoff. Figures like the queer artist Ibanjiha are therefore not only representative in this field but exceptional. “Ajangman,” a performer who worked as a drag king and might be called Ibanjiha's next generation, runs a YouTube channel called “Ajangjwi” whose main material is humorous storytelling about her experiences and feelings in lesbian subculture. In a recently posted video titled <The Sorrow of a Young Conservabian (Conservative Lesbian)>, she reveals that she is quite conservative and confesses that until her mid-twenties she held prejudices against people of non-normative appearance and disposition. But her confession does not stop at typifying (conservative) lesbians; it moves toward a sly critique of identity politics: “So, you queer folks who have identified, or identified long ago! Being queer doesn't mean you get to be smug, like ‘I know more than other people,’ or (to non-queers) ‘What do you know! Do you know how much wider my horizons are than yours!?’ You have to keep discovering your own bigoted sides, and when you find them, beat them down, and live supporting, standing with, and loving everyone... everyone in the community...” To deliver this message, which might grate on some ears, “Ajangman,” as “Ajangjwi,” performs an exaggeratedly feminine manner of speech and gesture, offering her own experience to viewers as one typified example of the “conservative lesbian.” For her, identity can be an occasion to “discover one's bigoted sides,” but it is no answer that automatically dissolves them. Through identity she learns, above all, that one must “live supporting, standing with, and loving everyone.” If it cannot do that, what use is identity anyway?
Sources:
Refers to “The Combahee River Collective Statement.” The Korean translation consulted: Breanne Fahs (ed.), trans. Yang Hyo-sil, Lee Ra-young, Lee Jin-sil, Han Woo-ri, and Hwang Miyojo, “The Combahee River Collective Statement,” in 『우리는 다 태워버릴 것이다: 페미니즘 매니페스토, 폭발적으로 저항하는 언어들』 (Burn It Down!: Feminist Manifestos for the Revolution), Bada Publishing, 2021, pp. 435-448. Unless otherwise marked, the direct quotations are from this text.
Asad Haider, trans. Kwon Soon-wook, 『오인된 정체성』 (Mistaken Identity), 두번째테제, 2021, p. 28.
Demita Frazier, “Rethinking Identity Politics,” Sojourner (September 1995), p. 12; requoted from 『오인된 정체성』, p. 30.
Douglas Crimp, trans. Kim Soo-yeon, “Right On, Girlfriend!,” in 『애도와 투쟁』 (Mourning and Militancy), Hyunsil Munhwa, 2021, pp. 231-267. Unless otherwise marked, the direct quotations are from this text.
Étienne Balibar and Immanuel Wallerstein, trans. Kim Sang-woon, 『인종, 국민, 계급: 모호한 정체성들』 (Race, Nation, Class: Ambiguous Identities), 두번째테제, 2022, p. 67.
Teresa de Lauretis, trans. 에일 (a feminist translation group), “The Technology of Gender” (「젠더의 테크놀로지」), https://en-movement.net/195
A line from The Hunger Games: Catching Fire, as quoted by Mark Fisher. Mark Fisher, trans. Park Jin-chul and the 리시올 editorial team, “Remember Who the Enemy Is” (「적이 누구인지 기억하라」), https://playtime.blog/2019/03/03/적이-누구인지-기억하라/
Ajangjwi, <The Sorrow of a Young Conservabian (Conservative Lesbian)>, https://www.youtube.com/watch?v=ywbkH_TK2Is
Below is a comprehensive guide to the competencies needed, as of 2025, to grow into a successful startup founder, organized into five areas. For each area it lays out goals, action guidelines, concrete ways to use tools, measurable key performance indicators (KPIs), and reference books and learning materials. Working through it systematically builds up the full range of capabilities that startup management requires.
1. Business Model & Market Strategy
Goals
• Clarify the customer value proposition and build a repeatable revenue model
• Define target customer segments and a growth strategy from data-driven market analysis
Action Guidelines
1. Define the core value:
• Use the Value Proposition Canvas to document customer pain points and solutions
• Validate hypotheses through interviews with at least 10-20 customers
• Measure early interest with simple experiments such as a landing page or email capture
2. Analyze the market:
• Quantify market size with a TAM/SAM/SOM analysis
• Run SWOT analyses on 3-5 competitors and map their pricing
• Build an entry strategy per customer segment
3. Set the growth strategy:
• Design stage-by-stage strategy with AARRR (Pirate Metrics): acquisition (onboarding) → activation → retention → referral → revenue
• After launching the initial MVP, iterate on feedback and improve features based on data
Tools
• Planning/idea validation: Business Model Canvas, Value Proposition Canvas (Strategyzer)
• Market data research: Crunchbase, Statista, Gartner Reports
• Customer interviews and surveys: Typeform, Zoom
• Landing pages for early experiments: Carrd, Launchrock
Key Metrics (KPIs)
• Landing page conversion rate (target: 10% or higher; see the sketch below)
• Share of interview feedback agreeing the value proposition is clear (70% or higher)
• Completeness of the TAM/SAM/SOM analysis report
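A minimal sketch, in Python, of how the two measurable KPIs above might be tracked; the input numbers are hypothetical examples, and only the 10% and 70% thresholds come from this guide.

```python
# KPI check for the business-model stage.
# All input figures are hypothetical examples.

def conversion_rate(signups: int, visitors: int) -> float:
    """Landing page conversion rate, as a percentage."""
    return 100.0 * signups / visitors if visitors else 0.0

def agreement_rate(agreed: int, interviewed: int) -> float:
    """Share of interviewees who found the value proposition clear."""
    return 100.0 * agreed / interviewed if interviewed else 0.0

landing = conversion_rate(signups=58, visitors=500)   # -> 11.6%
clarity = agreement_rate(agreed=15, interviewed=20)   # -> 75.0%

print(f"Landing conversion: {landing:.1f}% (target >= 10%)")
print(f"Value-prop clarity: {clarity:.1f}% (target >= 70%)")
```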
Books & Resources
• Books: Business Model Generation (Alexander Osterwalder), Blue Ocean Strategy (W. Chan Kim & Renée Mauborgne)
• Online courses: Strategyzer Academy; “Market Research” courses on Udemy
2. Technology & Digital Literacy
Goals
• Understand core technology trends such as AI, cloud, and data analytics well enough to communicate with the development team
• Strengthen data-driven decision-making and fluency with collaboration tools
Action Guidelines
1. Understand the core technologies:
• Grasp the basic workings of AI/ML, blockchain, and cloud services
• Analyze the tech stacks of competitors and leading companies
2. Use collaboration and analytics tools:
• Measure customer behavior with Google Analytics and Mixpanel
• Do basic UI/UX prototyping in Figma
• Raise operational efficiency with collaboration tools such as Slack, Notion, and Asana/Jira
3. Address security and privacy:
• Understand the outlines of data regulations such as GDPR and CCPA
• Review the OWASP Top 10 vulnerabilities at the MVP stage
Tools
• Data analytics: Mixpanel, Amplitude, Google Analytics
• Prototyping: Figma
• Collaboration: Slack, Notion, Asana, Jira
• Security guides: OWASP Top 10, GDPR checklists
Key Metrics (KPIs)
• Able to clearly define 20 or more key technical terms
• KPI dashboard built and updated on a regular cadence (weekly/monthly)
• Five or more MVP security vulnerabilities fixed before launch
Books & Resources
• Books: The Innovator’s Dilemma (Clayton M. Christensen)
• Online courses: AI/ML basics on Coursera and Udemy; free AWS/GCP/Azure training
• News and research: TechCrunch, Gartner, CB Insights
3. Finance & Fundraising
Goals
• Understand financial statement analysis and unit economics
• Be able to pitch to a range of investors and negotiate investment terms
• Manage cash flow predictably and build risk-response plans
Action Guidelines
1. Learn the financial basics:
• Read income statements, cash flow statements, and balance sheets
• Calculate core financial metrics such as CAC, LTV, and burn rate
2. Build the fundraising strategy:
• Write a 10-15 slide pitch deck (vision, market, product, financials, team)
• Meet at least five investors and revise the deck on their feedback
• Understand basic term sheet conditions and calculate equity dilution
3. Manage risk:
• Build financial models for optimistic, neutral, and pessimistic scenarios
• Execute cost optimization (trimming cloud and marketing spend)
Tools
• Financial model templates: Excel, Google Sheets
• Investor research: Crunchbase, PitchBook, AngelList
• Pitch deck references: the early Airbnb and Uber decks
Key Metrics (KPIs)
• LTV:CAC ratio of 3:1 or higher (see the sketch below)
• 100% of the funding target raised in the Pre-Seed or Seed round
• Burn managed to secure at least 6-12 months of runway
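A minimal sketch of the unit-economics math behind these targets, using the standard simple formulas (LTV taken as monthly gross profit per customer divided by churn); every input figure below is a hypothetical example.

```python
# Unit-economics sketch; all inputs are hypothetical examples.

def cac(sales_marketing_spend: float, new_customers: int) -> float:
    """Customer acquisition cost: spend / customers acquired."""
    return sales_marketing_spend / new_customers

def ltv(arpu_monthly: float, gross_margin: float, monthly_churn: float) -> float:
    """Lifetime value: monthly gross profit per customer / churn rate."""
    return arpu_monthly * gross_margin / monthly_churn

def runway_months(cash_on_hand: float, monthly_burn: float) -> float:
    """Months until cash runs out at the current net burn."""
    return cash_on_hand / monthly_burn

acquisition = cac(20_000.0, new_customers=400)                           # $50
lifetime = ltv(arpu_monthly=30.0, gross_margin=0.7, monthly_churn=0.05)  # $420
print(f"LTV:CAC = {lifetime / acquisition:.1f}:1 (target >= 3:1)")
print(f"Runway: {runway_months(600_000.0, 75_000.0):.0f} months (target 6-12+)")
```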
Books & Resources
• Books: Venture Deals (Brad Feld & Jason Mendelson), Angel (Jason Calacanis)
• Online courses: Y Combinator Startup School; “Startup Finance” courses on Udemy
4. Organization & Leadership
Goals
• Secure key talent and build a performance-oriented culture
• Raise team engagement through clear, transparent communication
• Achieve execution efficiency with agile/lean processes
Action Guidelines
1. Build the team and culture:
• Document co-founder roles and responsibilities
• Complete hiring for key roles
• Introduce OKR-based performance management (quarterly review sessions)
2. Communicate as a leader:
• Hold quarterly all-hands meetings to share the vision
• Run a 1:1 process within 72 hours whenever conflict arises
• Hold a team-building workshop once a quarter
3. Manage the process:
• Run Agile/Scrum sprints
• Manage schedules with milestones and Gantt charts
• Review OKRs monthly and derive improvements
Tools
• Hiring/talent management: Greenhouse, Lever (ATS)
• OKR management: Notion, WorkBoard
• Project management: Asana, Jira, Trello
Key Metrics (KPIs)
• 80% or more positive responses on team satisfaction surveys
• 80% or higher sprint completion rate
• Annual employee attrition held at 10% or below
Books & Resources
• Books: High Output Management (Andrew S. Grove), Radical Candor (Kim Scott)
• Online courses: “Leading Teams” on Coursera; LinkedIn Learning leadership courses
5. Marketing & Branding
Goals
• Build efficient customer acquisition strategies and optimize the channel mix
• Win loyal customers through brand storytelling
• Practice growth marketing grounded in improving metrics such as NPS, CAC, and conversion rate
Action Guidelines
1. Acquire customers:
• Test channels such as Google Ads, Facebook Ads, and LinkedIn Ads
• Improve landing page conversion with A/B tests (target: 10% → 15%)
• Build an SEO strategy (select five keywords) and run content marketing
2. Tell the brand story:
• Document 3-5 core keywords for the brand mission, vision, and values
• Post to a blog or Medium at least twice a month; join industry webinars
• Open a customer community (Discord, Slack) and hold regular Q&A sessions
3. Manage customer experience (CX):
• Survey NPS quarterly; build a VOC (customer complaint) database
• Use customer journey maps to fix drop-off points
Tools
• Ads and marketing analytics: Google Ads, Facebook Ads Manager, Ahrefs, SEMrush
• Email marketing: MailChimp, Klaviyo
• VOC management: Intercom, Zendesk
Key Metrics (KPIs)
• CAC target met (e.g., $20 or less per customer)
• 10% monthly growth in social followers; 1,000+ views per blog post
• NPS of 30 or higher (see the sketch below); customer complaints down 20% within a quarter
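A minimal sketch of the NPS figure used above: the standard Net Promoter Score formula (percentage of promoters minus percentage of detractors on a 0-10 survey), applied to hypothetical responses.

```python
# Net Promoter Score on hypothetical 0-10 survey responses.

def nps(scores):
    """% promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

survey = [10, 9, 9, 8, 7, 10, 6, 9, 3, 8, 9, 10]  # example responses
print(f"NPS = {nps(survey):.0f} (target >= 30)")   # -> NPS = 42
```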
Books & Resources
• Books: This Is Marketing (Seth Godin), Contagious (Jonah Berger)
• Online courses: HubSpot Academy; “Growth Marketing” courses on Udemy
Taken together, this guide lays out systematically the competencies a successful startup founder should have as of 2025. Following the roadmap, acquiring the knowledge in each area, setting execution strategies, using the tools, improving the metrics, and studying the referenced books and courses in parallel, makes sustainable startup management possible, grounded in realistic market validation and a scalable growth strategy.
Below is a deeper exposition of each stage of the C.U.R.V.E.S. framework proposed earlier, with additional concrete methods, examples, and tools to consider in execution. The framework is meant as a guideline that goes beyond theory and can be applied in many real settings: organizations, research teams, individual researchers, design studios, tech startups, and more.
C (Contextualizing Unknowns)
Goal:
• Distinguish what we know, what we only “think” we know, what we explicitly recognize as unknown, and the unknown territory we have not even recognized yet.
• Use this to grasp the terrain of the current knowledge ecosystem and to lay a basis for finding “gaps of the unknown” that can potentially be explored.
Execution methods and considerations:
• Knowledge mapping:
• Using a whiteboard or an online collaboration tool such as Miro or Mural, visually lay out the field's key concepts, theories, received wisdom, unsolved problems, and known limits.
• Example: a drug development team would map the elements already “known”: current therapies, known side effects, existing research results, patient needs, market data.
• Gap analysis:
• Through literature reviews, patent searches, and market analysis reports, chart the range of knowledge secured so far and focus on the gaps. Some of these gaps are already “known unknowns”; others potentially harbor possibilities not yet recognized at all (unknown unknowns).
• Stakeholder interviews:
• Listen to experts on the team, experts in other fields, and users. In these conversations entirely new points of recognition can surface: “why have we never considered this problem?” or “so there is this perspective, too.”
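To make the known/unknown distinctions of this step concrete, here is a minimal sketch of a knowledge-map register as a data structure; the categories follow the text above, while the example entries (a drug-development case) are hypothetical.

```python
# A knowledge-map register for the "C" step; entries are hypothetical.
from dataclasses import dataclass
from enum import Enum

class Epistemic(Enum):
    KNOWN_KNOWN = "established knowledge"
    ASSUMED_KNOWN = "believed known, unverified"  # what we only think we know
    KNOWN_UNKNOWN = "recognized gap"
    UNKNOWN_UNKNOWN = "not yet recognized"        # placeholder until surfaced

@dataclass
class MapEntry:
    topic: str
    status: Epistemic
    source: str  # where the classification came from

register = [
    MapEntry("current therapies", Epistemic.KNOWN_KNOWN, "literature review"),
    MapEntry("long-term side effects", Epistemic.KNOWN_UNKNOWN, "gap analysis"),
    MapEntry("patients prefer oral dosing", Epistemic.ASSUMED_KNOWN, "team belief"),
]

# The gaps worth probing are the recognized unknowns plus unverified beliefs.
gaps = [e.topic for e in register
        if e.status in (Epistemic.KNOWN_UNKNOWN, Epistemic.ASSUMED_KNOWN)]
print(gaps)
```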
U (Unpacking Assumptions)
Goal:
• Identify and critically examine the assumptions, biases, and “common sense” that have been taken for granted in the field or problem area.
• Thereby break frames that have hardened into unwritten rules and open up new questions and possible approaches.
Execution methods and considerations:
• Assumption listing:
• Team members write assumptions commonly accepted in the field (e.g., “this problem is technically unsolvable,” “customers would never want this feature”) on sticky notes and put them on a wall.
• Then interrogate each one: “why do we think this is true?”, “what if it is wrong?”
• Assumption inversion:
• Deliberately flip assumptions. For example, invert “customers hate this feature” into “customers might actually love this feature,” and look for evidence that supports or refutes it.
• This process can surface needs and problem points no one had recognized.
• Critical debate sessions:
• Invite outside experts, or assign someone on the team the role of designated dissenter (critical friend) who continually challenges existing views and assumptions.
• This yields clues that trigger recognition of the unknown territory “outside the existing paradigm.”
R (Reframing Questions)
Goal:
• Look at known problems from new angles, and give shape, in the form of questions, to latent areas not yet defined as problems.
• Draw the unrecognized unknown into the range of awareness through the medium of the “question.”
Execution methods and considerations:
• Question sprints:
• Using design thinking and brainstorming techniques, pour out as many questions as possible about a topic in a short time. A session that produces only questions, not answers, surfaces doubts outside the frame.
• Keyword variation:
• Derive entirely new questions by restating the problem in different words and contexts. Example: change “how do we raise user satisfaction?” to “how do we deliver value users never expected?”
• Foresight and scenario planning:
• Look at the problem from a future vantage point: use hypothetical scenarios to explore how issues not yet visible today (unknowns) could become central problems in future situations.
V (Venture into Speculative Exploration)
Goal:
• Approach territory that is unverified and unclear, directly and indirectly, through prototypes, pilot projects, and experimental research.
• Move a step beyond “could we?” to “let's just try it,” exposing the unknown through experience.
Execution methods and considerations:
• Rapid prototyping:
• In product development, realize the idea in minimal form: mock-ups, 3D printing, UX wireframes.
• In knowledge research, run hypothesis-driven first experiments or pilot research programs and obtain entirely new data.
• Interdisciplinary collaboration:
• When biologists work with designers, or engineers with sociologists, questions and answers arise that no one had thought of before.
• A fail-fast culture:
• When something fails, ask “what did this failure show us we didn't know?” rather than assigning blame.
• Such a culture encourages attempts to explore the unknown and, in the end, strengthens the capacity to manage ignorance.
E (Evaluate and Reflect)
Goal:
• Reorganize the insights newly revealed through trials, experiments, and exploration, along with the areas missed and the parts that remain unclear.
• Check what has been learned, what is still unknown, and what new questions have arisen.
Execution methods and considerations:
• Keep a reflective journal:
• Each member, or the R&D team as a whole, records insights gained during exploration, interesting failures, and newly raised questions.
• Revisit the journal periodically to assess progress.
• Metacognition sessions:
• Hold dedicated meetings to discuss “what do we now know, and what did we not know at all?”
• Use them to define the unknown more sharply and to set the strategy for the next step.
• Feedback loops:
• Share exploration results with outside experts, customers, users, and academics, and collect their views.
• This feedback can raise new questions and feed the next cycle of managing the unknown.
S (Sustain and Institutionalize)
Goal:
• Keep the ignorance-management process from being a one-off event; embed it in the organizational or research culture.
• Build a sustainable innovation pipeline so that whenever new unknown territory appears, the cycle of exploring, questioning, and experimenting continues naturally.
Execution methods and considerations:
• Run a regular “ignorance forum”:
• Quarterly or semiannually, hold an internal “ignorance forum” to discuss areas under exploration, questions not yet clearly addressed, and new unknowns opened up by emerging technologies.
• Keep exploratory projects in the R&D portfolio:
• Allocate a fixed share of the overall R&D or innovation portfolio to “pure exploration,” sustaining ignorance-management work from a long-term perspective even when immediate results are uncertain.
• Overhaul reward and evaluation systems:
• Give incentives for raising innovative questions, spotting blind spots, and drawing new insight from failure.
• Such policies lead team members to explore the unknown actively.
Overall commentary:
This framework goes beyond merely confirming “what we don't know”: it converts those unknown areas into recognizable forms (questions, hypotheses, prototypes, scenarios) and uses them to generate new knowledge and innovation. And by establishing the process itself as a culture that organizations and individuals repeat and improve, it helps them evolve into “learning organizations” that keep securing new insight in a constantly changing environment.
Ultimately, the C.U.R.V.E.S. framework works as an execution guide that deals consciously with unknown unknowns and manages ignorance actively, rather than treating it as a negative, and can thereby contribute to new innovation, creative problem-solving, and future-oriented strategy.
The reason for proposing such a range of fields earlier was to emphasize that knowledge is not confined to one or two domains but is entangled with the whole of human activity: culture, society, science, technology, art, institutions, and the living environment. Each field shows in its own way how knowledge is produced, validated, transmitted, and used. The point was to reveal that knowledge is not simply a “collection of facts” but an “ecological phenomenon” in which complex interactions, meaning-making, interpretive processes, institutional contexts, technological mediation, and emotional and ethical judgment all combine.
Proposing a Framework-Methodology
Now let me propose a kind of meta framework-methodology that can organize these diverse fields and perspectives into a single structure. The framework recommends considering the following axes when looking at knowledge. The axes intersect, and the questions attached to each axis allow the formation of knowledge to be analyzed in any field or case.
1. The producer/agent axis (Actors/Agency)
• Who makes the knowledge? (experts, non-experts, AI, communities, cultural spheres, etc.)
• With what motives (economic, political, curiosity, ethical conviction) is the knowledge produced?
• Which knowing subjects are involved: individual intuition, collective dialogue, machine-learning processes?
2. The mediation/representation axis (Mediation/Representation)
• In what media or codes is the knowledge expressed? (language, mathematical formulas, art, gesture, databases, algorithms)
• What technical and material tools contribute to its spread and transformation? (the internet, print, BCI, AR/VR)
• In what form is it expressed: abstract theory, concrete experience, symbol, narrative?
3. The validation/legitimacy axis (Validation/Legitimacy)
• By what criteria is the knowledge accepted as “true” or “useful”?
• What are the validation procedures (peer review, experimental replication, citizen participation, AI-based verification, traditional authority)?
• What cultural, legal, and ethical norms shape the approval process?
4. The context/environment axis (Context/Environment)
• In what context is the knowledge born and given meaning? (historical, cultural, ecological, economic, political circumstances)
• How does knowledge production vary with place, time, community, and ecological conditions?
5. The purpose/function axis (Purpose/Function)
• What is this knowledge used for? Problem-solving, policy decisions, identity formation, prediction, control, artistic expression, ethical judgment?
• Is the knowledge future-oriented (long-term scenarios), a temporary fix, or a contribution to consciousness and culture?
6. The change/evolution axis (Dynamics/Transformation)
• How does the knowledge change over time? (forgetting, renewal, revolutionary paradigm shifts)
• How do new evidence or technologies, environmental change, and shifts in social values reshape it?
7. The relation/network axis (Relational/Network Perspective)
• What networks of actors and information sources constitute the knowledge?
• How does network structure (hubs, clusters, bridges) govern the speed and shape of its diffusion?
How to Apply It
1. Choose a target: select a particular knowledge domain (e.g., traditional ecological knowledge, AI medical diagnosis, a religious doctrine, a scientific theory) or a case (e.g., a particular wiki article, an innovation cluster, a large dataset).
2. Apply the questions: using the seven axes above, put questions like the following to that knowledge.
• “Who makes this knowledge?”
• “Through what media is it expressed?”
• “What validation process does it pass through to gain authority?”
• “In what cultural and environmental context is it born and circulated?”
• “What purpose does it serve (solving social problems, building empathy, economic gain)?”
• “How does it change over time?”
• “What networks or institutional structures determine its shape?”
3. Integrate the analysis: synthesizing the axis-by-axis answers makes it clear that knowledge is not a mere “heap of facts” but a multifaceted, dynamic web of relations (one minimal way to structure such an analysis is sketched below).
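A minimal sketch of the seven-axis analysis as a data structure; the axis names follow the framework above, and the filled-in answers for the example case are hypothetical.

```python
# The seven axes as an analysis checklist; example answers are hypothetical.
from dataclasses import dataclass, field

AXES = [
    "actors",      # who produces the knowledge, and why
    "mediation",   # the media and codes that represent and carry it
    "validation",  # how it is legitimated as true or useful
    "context",     # its historical, cultural, ecological setting
    "purpose",     # what it is used for
    "dynamics",    # how it changes over time
    "network",     # the relational structure it lives in
]

@dataclass
class KnowledgeAnalysis:
    case: str
    answers: dict = field(default_factory=dict)

    def missing_axes(self):
        """Axes not yet answered; the remaining checklist."""
        return [a for a in AXES if a not in self.answers]

study = KnowledgeAnalysis("AI medical diagnosis")
study.answers["actors"] = "clinicians, ML engineers, regulators"
study.answers["validation"] = "clinical trials, peer review, approval bodies"
print(study.missing_axes())  # axes still to be analyzed
```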
Significance
This framework-methodology offers the study of knowledge the following:
• A macro view: knowledge phenomena captured in different fields can be placed on a single conceptual map.
• Comparability: knowledge phenomena from different domains can be compared and contrasted through the same axis questions.
• Dialectical insight: the tensions that surface between axes (e.g., validation criteria vs. cultural consensus) illuminate the complexity of knowledge formation.
• A consilient approach: cross-analyzing knowledge phenomena across science, the humanities, art, technology, ecology, politics, and economics fosters an integrative understanding of the knowledge ecosystem.
In sum, the various fields presented so far were examples meant to reveal the rich facets of knowledge. Proposing this framework-methodology on that basis makes systematic, in-depth analysis of knowledge possible in any field or situation.
The history of humanity's accumulation of knowledge means more than the mere accumulation of information. Across history, science, art, philosophy, and culture we absorb diverse facts and perspectives and gain new insights through them. The process shows that knowledge is not a fixed truth but a living current, endlessly changing and reinterpreted.
Knowledge is inseparable from context. The same fact can carry entirely different meanings depending on culture, era, and language, and can be revised and reassessed at any time in light of new evidence or perspectives. Knowledge is thus a social product, formed within a specific time and place through communal dialogue, critique, and interpretation. This demands humility, flexibility, and critical thinking from those who pursue it. Knowledge deepens and expands when we do not absolutize existing paradigms but keep the door open to new questions and perspectives and reflect without pause.
Knowledge is also inseparable from humanness. Not only rational and logical analysis but the whole range of human capacities, emotion, intuition, imagination, artistic expression, ethical choice, and empathy, takes part in producing and interpreting knowledge. This means the activity of knowing goes beyond mental computation or information consumption and touches the existential question of how to live. The value of knowledge changes according to how we put what we know into practice, and what responsibility and ethics we exercise in our relations with others.
Ultimately, the essence of the pursuit of knowledge is not “how much do you know” but a response to the fundamental, practical question “how will you understand and live.” In a world of constant change we stand before floods of information and uncertainty. What we need there is not the addition of fragmentary facts but an integrative stance that finds meaning, builds relationships, designs better lives, and pursues the common good. Knowledge can be a compass for this journey. But sailing in the direction the compass points remains, in the end, each person's choice, and through that choice we can cultivate the organic ecosystem of knowledge into something richer and more meaningful.
Machines of Loving Grace
How AI Could Transform the World for the Better
October 2024
I think and talk a lot about the risks of powerful AI. The company I’m the CEO of, Anthropic, does a lot of research on how to reduce these risks. Because of this, people sometimes draw the conclusion that I’m a pessimist or “doomer” who thinks AI will be mostly bad or dangerous. I don’t think that at all. In fact, one of my main reasons for focusing on risks is that they’re the only thing standing between us and what I see as a fundamentally positive future. I think that most people are underestimating just how radical the upside of AI could be, just as I think most people are underestimating how bad the risks could be.
In this essay I try to sketch out what that upside might look like—what a world with powerful AI might look like if everything goes right. Of course no one can know the future with any certainty or precision, and the effects of powerful AI are likely to be even more unpredictable than past technological changes, so all of this is unavoidably going to consist of guesses. But I am aiming for at least educated and useful guesses, which capture the flavor of what will happen even if most details end up being wrong. I’m including lots of details mainly because I think a concrete vision does more to advance discussion than a highly hedged and abstract one.
First, however, I wanted to briefly explain why I and Anthropic haven’t talked that much about powerful AI’s upsides, and why we’ll probably continue, overall, to talk a lot about risks. In particular, I’ve made this choice out of a desire to:
Maximize leverage. The basic development of AI technology and many (not all) of its benefits seems inevitable (unless the risks derail everything) and is fundamentally driven by powerful market forces. On the other hand, the risks are not predetermined and our actions can greatly change their likelihood.
Avoid perception of propaganda. AI companies talking about all the amazing benefits of AI can come off like propagandists, or as if they’re attempting to distract from downsides. I also think that as a matter of principle it’s bad for your soul to spend too much of your time “talking your book”.
Avoid grandiosity. I am often turned off by the way many AI risk public figures (not to mention AI company leaders) talk about the post-AGI world, as if it’s their mission to single-handedly bring it about like a prophet leading their people to salvation. I think it’s dangerous to view companies as unilaterally shaping the world, and dangerous to view practical technological goals in essentially religious terms.
Avoid “sci-fi” baggage. Although I think most people underestimate the upside of powerful AI, the small community of people who do discuss radical AI futures often does so in an excessively “sci-fi” tone (featuring e.g. uploaded minds, space exploration, or general cyberpunk vibes). I think this causes people to take the claims less seriously, and to imbue them with a sort of unreality. To be clear, the issue isn’t whether the technologies described are possible or likely (the main essay discusses this in granular detail)—it’s more that the “vibe” connotatively smuggles in a bunch of cultural baggage and unstated assumptions about what kind of future is desirable, how various societal issues will play out, etc. The result often ends up reading like a fantasy for a narrow subculture, while being off-putting to most people.
Yet despite all of the concerns above, I really do think it’s important to discuss what a good world with powerful AI could look like, while doing our best to avoid the above pitfalls. In fact I think it is critical to have a genuinely inspiring vision of the future, and not just a plan to fight fires. Many of the implications of powerful AI are adversarial or dangerous, but at the end of it all, there has to be something we’re fighting for, some positive-sum outcome where everyone is better off, something to rally people to rise above their squabbles and confront the challenges ahead. Fear is one kind of motivator, but it’s not enough: we need hope as well.
The list of positive applications of powerful AI is extremely long (and includes robotics, manufacturing, energy, and much more), but I’m going to focus on a small number of areas that seem to me to have the greatest potential to directly improve the quality of human life. The five categories I am most excited about are:
Biology and physical health
Neuroscience and mental health
Economic development and poverty
Peace and governance
Work and meaning
My predictions are going to be radical as judged by most standards (other than sci-fi “singularity” visions), but I mean them earnestly and sincerely. Everything I’m saying could very easily be wrong (to repeat my point from above), but I’ve at least attempted to ground my views in a semi-analytical assessment of how much progress in various fields might speed up and what that might mean in practice. I am fortunate to have professional experience in both biology and neuroscience, and I am an informed amateur in the field of economic development, but I am sure I will get plenty of things wrong. One thing writing this essay has made me realize is that it would be valuable to bring together a group of domain experts (in biology, economics, international relations, and other areas) to write a much better and more informed version of what I’ve produced here. It’s probably best to view my efforts here as a starting prompt for that group.
Basic assumptions and framework
To make this whole essay more precise and grounded, it’s helpful to specify clearly what we mean by powerful AI (i.e. the threshold at which the 5-10 year clock starts counting), as well as laying out a framework for thinking about the effects of such AI once it’s present.
What powerful AI (I dislike the term AGI) will look like, and when (or if) it will arrive, is a huge topic in itself. It’s one I’ve discussed publicly and could write a completely separate essay on (I probably will at some point). Obviously, many people are skeptical that powerful AI will be built soon and some are skeptical that it will ever be built at all. I think it could come as early as 2026, though there are also ways it could take much longer. But for the purposes of this essay, I’d like to put these issues aside, assume it will come reasonably soon, and focus on what happens in the 5-10 years after that. I also want to assume a definition of what such a system will look like, what its capabilities are and how it interacts, even though there is room for disagreement on this.
By powerful AI, I have in mind an AI model—likely similar to today’s LLMs in form, though it might be based on a different architecture, might involve several interacting models, and might be trained differently—with the following properties:
In terms of pure intelligence, it is smarter than a Nobel Prize winner across most relevant fields – biology, programming, math, engineering, writing, etc. This means it can prove unsolved mathematical theorems, write extremely good novels, write difficult codebases from scratch, etc.
In addition to just being a “smart thing you talk to”, it has all the “interfaces” available to a human working virtually, including text, audio, video, mouse and keyboard control, and internet access. It can engage in any actions, communications, or remote operations enabled by this interface, including taking actions on the internet, taking or giving directions to humans, ordering materials, directing experiments, watching videos, making videos, and so on. It does all of these tasks with, again, a skill exceeding that of the most capable humans in the world.
It does not just passively answer questions; instead, it can be given tasks that take hours, days, or weeks to complete, and then goes off and does those tasks autonomously, in the way a smart employee would, asking for clarification as necessary.
It does not have a physical embodiment (other than living on a computer screen), but it can control existing physical tools, robots, or laboratory equipment through a computer; in theory it could even design robots or equipment for itself to use.
The resources used to train the model can be repurposed to run millions of instances of it (this matches projected cluster sizes by ~2027), and the model can absorb information and generate actions at roughly 10x-100x human speed. It may however be limited by the response time of the physical world or of software it interacts with.
Each of these million copies can act independently on unrelated tasks, or if needed can all work together in the same way humans would collaborate, perhaps with different subpopulations fine-tuned to be especially good at particular tasks.
We could summarize this as a “country of geniuses in a datacenter”.
Clearly such an entity would be capable of solving very difficult problems, very fast, but it is not trivial to figure out how fast. Two “extreme” positions both seem false to me. First, you might think that the world would be instantly transformed on the scale of seconds or days (“the Singularity”), as superior intelligence builds on itself and solves every possible scientific, engineering, and operational task almost immediately. The problem with this is that there are real physical and practical limits, for example around building hardware or conducting biological experiments. Even a new country of geniuses would hit up against these limits. Intelligence may be very powerful, but it isn’t magic fairy dust.
Second, and conversely, you might believe that technological progress is saturated or rate-limited by real world data or by social factors, and that better-than-human intelligence will add very little. This seems equally implausible to me—I can think of hundreds of scientific or even social problems where a large group of really smart people would drastically speed up progress, especially if they aren’t limited to analysis and can make things happen in the real world (which our postulated country of geniuses can, including by directing or assisting teams of humans).
I think the truth is likely to be some messy admixture of these two extreme pictures, something that varies by task and field and is very subtle in its details. I believe we need new frameworks to think about these details in a productive way.
Economists often talk about “factors of production”: things like labor, land, and capital. The phrase “marginal returns to labor/land/capital” captures the idea that in a given situation, a given factor may or may not be the limiting one – for example, an air force needs both planes and pilots, and hiring more pilots doesn’t help much if you’re out of planes. I believe that in the AI age, we should be talking about the marginal returns to intelligence, and trying to figure out what the other factors are that are complementary to intelligence and that become limiting factors when intelligence is very high. We are not used to thinking in this way—to asking “how much does being smarter help with this task, and on what timescale?”—but it seems like the right way to conceptualize a world with very powerful AI.
My guess at a list of factors that limit or are complementary to intelligence includes:
Speed of the outside world. Intelligent agents need to operate interactively in the world in order to accomplish things and also to learn. But the world only moves so fast. Cells and animals run at a fixed speed so experiments on them take a certain amount of time which may be irreducible. The same is true of hardware, materials science, anything involving communicating with people, and even our existing software infrastructure. Furthermore, in science many experiments are often needed in sequence, each learning from or building on the last. All of this means that the speed at which a major project—for example developing a cancer cure—can be completed may have an irreducible minimum that cannot be decreased further even as intelligence continues to increase.
Need for data. Sometimes raw data is lacking and in its absence more intelligence does not help. Today’s particle physicists are very ingenious and have developed a wide range of theories, but lack the data to choose between them because particle accelerator data is so limited. It is not clear that they would do drastically better if they were superintelligent—other than perhaps by speeding up the construction of a bigger accelerator.
Intrinsic complexity. Some things are inherently unpredictable or chaotic and even the most powerful AI cannot predict or untangle them substantially better than a human or a computer today. For example, even incredibly powerful AI could predict only marginally further ahead in a chaotic system (such as the three-body problem) in the general case, as compared to today’s humans and computers.
Constraints from humans. Many things cannot be done without breaking laws, harming humans, or messing up society. An aligned AI would not want to do these things (and if we have an unaligned AI, we’re back to talking about risks). Many human societal structures are inefficient or even actively harmful, but are hard to change while respecting constraints like legal requirements on clinical trials, people’s willingness to change their habits, or the behavior of governments. Examples of advances that work well in a technical sense, but whose impact has been substantially reduced by regulations or misplaced fears, include nuclear power, supersonic flight, and even elevators.
Physical laws. This is a starker version of the first point. There are certain physical laws that appear to be unbreakable. It’s not possible to travel faster than light. Pudding does not unstir. Chips can only have so many transistors per square centimeter before they become unreliable. Computation requires a certain minimum energy per bit erased, limiting the density of computation in the world.
There is a further distinction based on timescales. Things that are hard constraints in the short run may become more malleable to intelligence in the long run. For example, intelligence might be used to develop a new experimental paradigm that allows us to learn in vitro what used to require live animal experiments, or to build the tools needed to collect new data (e.g. the bigger particle accelerator), or to (within ethical limits) find ways around human-based constraints (e.g. helping to improve the clinical trial system, helping to create new jurisdictions where clinical trials have less bureaucracy, or improving the science itself to make human clinical trials less necessary or cheaper).
Thus, we should imagine a picture where intelligence is initially heavily bottlenecked by the other factors of production, but over time intelligence itself increasingly routes around the other factors, even if they never fully dissolve (and some things like physical laws are absolute). The key question is how fast it all happens and in what order.
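To make the factors-of-production framing concrete, here is a toy model, purely illustrative and not from the essay: total progress runs at the rate of the scarcest factor, so scaling intelligence alone shows sharply diminishing returns once another bottleneck binds.

```python
# Toy model of complementary factors (illustrative only): progress is
# capped by the scarcest factor, so returns to intelligence alone vanish
# once experiment speed (or data) becomes the binding constraint.

def progress_rate(intelligence: float, experiment_speed: float, data: float) -> float:
    """Rate of progress under a hard bottleneck (a Leontief-style min)."""
    return min(intelligence, experiment_speed, data)

for scale in (1, 10, 100, 1000):
    rate = progress_rate(intelligence=1.0 * scale, experiment_speed=5.0, data=8.0)
    print(f"intelligence x{scale:>4}: progress rate = {rate}")
# Prints 1.0, then 5.0 three times: past the bottleneck, more intelligence
# adds nothing until the experiment-speed factor itself is improved
# (the "routing around" other factors described above).
```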
With the above framework in mind, I’ll try to answer that question for the five areas mentioned in the introduction.
1. Biology and health
Biology is probably the area where scientific progress has the greatest potential to directly and unambiguously improve the quality of human life. In the last century some of the most ancient human afflictions (such as smallpox) have finally been vanquished, but many more still remain, and defeating them would be an enormous humanitarian accomplishment. Beyond even curing disease, biological science can in principle improve the baseline quality of human health, by extending the healthy human lifespan, increasing control and freedom over our own biological processes, and addressing everyday problems that we currently think of as immutable parts of the human condition.
In the “limiting factors” language of the previous section, the main challenges with directly applying intelligence to biology are data, the speed of the physical world, and intrinsic complexity (in fact, all three are related to each other). Human constraints also play a role at a later stage, when clinical trials are involved. Let’s take these one by one.
Experiments on cells, animals, and even chemical processes are limited by the speed of the physical world: many biological protocols involve culturing bacteria or other cells, or simply waiting for chemical reactions to occur, and this can sometimes take days or even weeks, with no obvious way to speed it up. Animal experiments can take months (or more) and human experiments often take years (or even decades for long-term outcome studies). Somewhat related to this, data is often lacking—not so much in quantity, but quality: there is always a dearth of clear, unambiguous data that isolates a biological effect of interest from the other 10,000 confounding things that are going on, or that intervenes causally in a given process, or that directly measures some effect (as opposed to inferring its consequences in some indirect or noisy way). Even massive, quantitative molecular data, like the proteomics data that I collected while working on mass spectrometry techniques, is noisy and misses a lot (which types of cells were these proteins in? Which part of the cell? At what phase in the cell cycle?).
In part responsible for these problems with data is intrinsic complexity: if you’ve ever seen a diagram showing the biochemistry of human metabolism, you’ll know that it’s very hard to isolate the effect of any part of this complex system, and even harder to intervene on the system in a precise or predictable way. And finally, beyond just the intrinsic time that it takes to run an experiment on humans, actual clinical trials involve a lot of bureaucracy and regulatory requirements that (in the opinion of many people, including me) add unnecessary additional time and delay progress.
Given all this, many biologists have long been skeptical of the value of AI and “big data” more generally in biology. Historically, mathematicians, computer scientists, and physicists who have applied their skills to biology over the last 30 years have been quite successful, but have not had the truly transformative impact initially hoped for. Some of the skepticism has been reduced by major and revolutionary breakthroughs like AlphaFold (which has just deservedly won its creators the Nobel Prize in Chemistry) and AlphaProteo, but there’s still a perception that AI is (and will continue to be) useful in only a limited set of circumstances. A common formulation is “AI can do a better job analyzing your data, but it can’t produce more data or improve the quality of the data. Garbage in, garbage out”.
But I think that pessimistic perspective is thinking about AI in the wrong way. If our core hypothesis about AI progress is correct, then the right way to think of AI is not as a method of data analysis, but as a virtual biologist who performs all the tasks biologists do, including designing and running experiments in the real world (by controlling lab robots or simply telling humans which experiments to run – as a Principal Investigator would to their graduate students), inventing new biological methods or measurement techniques, and so on. It is by speeding up the whole research process that AI can truly accelerate biology. I want to repeat this because it’s the most common misconception that comes up when I talk about AI’s ability to transform biology: I am not talking about AI as merely a tool to analyze data. In line with the definition of powerful AI at the beginning of this essay, I’m talking about using AI to perform, direct, and improve upon nearly everything biologists do.
To get more specific on where I think acceleration is likely to come from, a surprisingly large fraction of the progress in biology has come from a truly tiny number of discoveries, often related to broad measurement tools or techniques that allow precise but generalized or programmable intervention in biological systems. There’s perhaps ~1 of these major discoveries per year and collectively they arguably drive >50% of progress in biology. These discoveries are so powerful precisely because they cut through intrinsic complexity and data limitations, directly increasing our understanding and control over biological processes. A few discoveries per decade have enabled both the bulk of our basic scientific understanding of biology, and have driven many of the most powerful medical treatments.
Some examples include:
CRISPR: a technique that allows live editing of any gene in living organisms (replacement of any arbitrary gene sequence with any other arbitrary sequence). Since the original technique was developed, there have been constant improvements to target specific cell types, increasing accuracy, and reducing edits of the wrong gene—all of which are needed for safe use in humans.
Various kinds of microscopy for watching what is going on at a precise level: advanced light microscopes (with various kinds of fluorescent techniques, special optics, etc), electron microscopes, atomic force microscopes, etc.
Genome sequencing and synthesis, which has dropped in cost by several orders of magnitude in the last couple decades.
Optogenetic techniques that allow you to get a neuron to fire by shining a light on it.
mRNA vaccines that, in principle, allow us to design a vaccine against anything and then quickly adapt it (mRNA vaccines of course became famous during COVID).
Cell therapies such as CAR-T that allow immune cells to be taken out of the body and “reprogrammed” to attack, in principle, anything.
Conceptual insights like the germ theory of disease or the realization of a link between the immune system and cancer.
I’m going to the trouble of listing all these technologies because I want to make a crucial claim about them: I think their rate of discovery could be increased by 10x or more if there were a lot more talented, creative researchers. Or, put another way, I think the returns to intelligence are high for these discoveries, and that everything else in biology and medicine mostly follows from them.
Why do I think this? Because of the answers to some questions that we should get in the habit of asking when we’re trying to determine “returns to intelligence”. First, these discoveries are generally made by a tiny number of researchers, often the same people repeatedly, suggesting skill and not random search (the latter might suggest lengthy experiments are the limiting factor). Second, they often “could have been made” years earlier than they were: for example, CRISPR was a naturally occurring component of the immune system in bacteria that’s been known since the 80’s, but it took another 25 years for people to realize it could be repurposed for general gene editing. They also are often delayed many years by lack of support from the scientific community for promising directions (see this profile on the inventor of mRNA vaccines; similar stories abound). Third, successful projects are often scrappy or were afterthoughts that people didn’t initially think were promising, rather than massively funded efforts. This suggests that it’s not just massive resource concentration that drives discoveries, but ingenuity.
Finally, although some of these discoveries have “serial dependence” (you need to make discovery A first in order to have the tools or knowledge to make discovery B)—which again might create experimental delays—many, perhaps most, are independent, meaning many at once can be worked on in parallel. Both these facts, and my general experience as a biologist, strongly suggest to me that there are hundreds of these discoveries waiting to be made if scientists were smarter and better at making connections between the vast amount of biological knowledge humanity possesses (again consider the CRISPR example). The success of AlphaFold/AlphaProteo at solving important problems much more effectively than humans, despite decades of carefully designed physics modeling, provides a proof of principle (albeit with a narrow tool in a narrow domain) that should point the way forward.
Thus, it’s my guess that powerful AI could at least 10x the rate of these discoveries, giving us the next 50-100 years of biological progress in 5-10 years. Why not 100x? Perhaps it is possible, but here both serial dependence and experiment times become important: getting 100 years of progress in 1 year requires a lot of things to go right the first time, including animal experiments and things like designing microscopes or expensive lab facilities. I’m actually open to the (perhaps absurd-sounding) idea that we could get 1000 years of progress in 5-10 years, but very skeptical that we can get 100 years in 1 year. Another way to put it is I think there’s an unavoidable constant delay: experiments and hardware design have a certain “latency” and need to be iterated upon a certain “irreducible” number of times in order to learn things that can’t be deduced logically. But massive parallelism may be possible on top of that.
What about clinical trials? Although there is a lot of bureaucracy and slowdown associated with them, the truth is that a lot (though by no means all!) of their slowness ultimately derives from the need to rigorously evaluate drugs that barely work or ambiguously work. This is sadly true of most therapies today: the average cancer drug increases survival by a few months while having significant side effects that need to be carefully measured (there’s a similar story for Alzheimer’s drugs). This leads to huge studies (in order to achieve statistical power) and difficult tradeoffs which regulatory agencies generally aren’t great at making, again because of bureaucracy and the complexity of competing interests.
When something works really well, it goes much faster: there’s an accelerated approval track and the ease of approval is much greater when effect sizes are larger. mRNA vaccines for COVID were approved in 9 months—much faster than the usual pace. That said, even under these conditions clinical trials are still too slow—mRNA vaccines arguably should have been approved in ~2 months. But these kinds of delays (~1 year end-to-end for a drug) combined with massive parallelization and the need for some but not too much iteration (“a few tries”) are very compatible with radical transformation in 5-10 years. Even more optimistically, it is possible that AI-enabled biological science will reduce the need for iteration in clinical trials by developing better animal and cell experimental models (or even simulations) that are more accurate in predicting what will happen in humans. This will be particularly important in developing drugs against the aging process, which plays out over decades and where we need a faster iteration loop.
Finally, on the topic of clinical trials and societal barriers, it is worth pointing out explicitly that in some ways biomedical innovations have an unusually strong track record of being successfully deployed, in contrast to some other technologies. As mentioned in the introduction, many technologies are hampered by societal factors despite working well technically. This might suggest a pessimistic perspective on what AI can accomplish. But biomedicine is unique in that although the process of developing drugs is overly cumbersome, once developed they generally are successfully deployed and used.
To summarize the above, my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years. I’ll refer to this as the “compressed 21st century”: the idea that after powerful AI is developed, we will in a few years make all the progress in biology and medicine that we would have made in the whole 21st century.
Although predicting what powerful AI can do in a few years remains inherently difficult and speculative, there is some concreteness to asking “what could humans do unaided in the next 100 years?”. Simply looking at what we’ve accomplished in the 20th century, or extrapolating from the first 2 decades of the 21st, or asking what “10 CRISPR’s and 50 CAR-T’s” would get us, all offer practical, grounded ways to estimate the general level of progress we might expect from powerful AI.
Below I try to make a list of what we might expect. This is not based on any rigorous methodology, and will almost certainly prove wrong in the details, but it’s trying to get across the general level of radicalism we should expect:
Reliable prevention and treatment of nearly all natural infectious disease. Given the enormous advances against infectious disease in the 20th century, it is not radical to imagine that we could more or less “finish the job” in a compressed 21st. mRNA vaccines and similar technology already point the way towards “vaccines for anything”. Whether infectious disease is fully eradicated from the world (as opposed to just in some places) depends on questions about poverty and inequality, which are discussed in Section 3.
Elimination of most cancer. Death rates from cancer have been dropping ~2% per year for the last few decades; thus we are on track to eliminate most cancer in the 21st century at the current pace of human science. Some subtypes have already been largely cured (for example some types of leukemia with CAR-T therapy), and I’m perhaps even more excited for very selective drugs that target cancer in its infancy and prevent it from ever growing. AI will also make possible treatment regimens very finely adapted to the individualized genome of the cancer—these are possible today, but hugely expensive in time and human expertise, which AI should allow us to scale. Reductions of 95% or more in both mortality and incidence seem possible. That said, cancer is extremely varied and adaptive, and is likely the hardest of these diseases to fully destroy. It would not be surprising if an assortment of rare, difficult malignancies persists.
Very effective prevention and effective cures for genetic disease. Greatly improved embryo screening will likely make it possible to prevent most genetic disease, and some safer, more reliable descendant of CRISPR may cure most genetic disease in existing people. Whole-body afflictions that affect a large fraction of cells may be the last holdouts, however.
Prevention of Alzheimer’s. We’ve had a very hard time figuring out what causes Alzheimer’s (it is somehow related to beta-amyloid protein, but the actual details seem to be very complex). It seems like exactly the type of problem that can be solved with better measurement tools that isolate biological effects; thus I am bullish about AI’s ability to solve it. There is a good chance it can eventually be prevented with relatively simple interventions, once we actually understand what is going on. That said, damage from already-existing Alzheimer’s may be very difficult to reverse.
Improved treatment of most other ailments. This is a catch-all category for other ailments including diabetes, obesity, heart disease, autoimmune diseases, and more. Most of these seem “easier” to solve than cancer and Alzheimer’s and in many cases are already in steep decline. For example, deaths from heart disease have already declined over 50%, and simple interventions like GLP-1 agonists have already made huge progress against obesity and diabetes.
Biological freedom. The last 70 years featured advances in birth control, fertility, management of weight, and much more. But I suspect AI-accelerated biology will greatly expand what is possible: weight, physical appearance, reproduction, and other biological processes will be fully under people’s control. We’ll refer to these under the heading of biological freedom: the idea that everyone should be empowered to choose what they want to become and live their lives in the way that most appeals to them. There will of course be important questions about global equality of access; see Section 3 for these.
Doubling of the human lifespan. This might seem radical, but life expectancy increased almost 2x in the 20th century (from ~40 years to ~75), so it’s “on trend” that the “compressed 21st” would double it again to 150. Obviously the interventions involved in slowing the actual aging process will be different from those that were needed in the last century to prevent (mostly childhood) premature deaths from disease, but the magnitude of change is not unprecedented. Concretely, there already exist drugs that increase maximum lifespan in rats by 25-50% with limited ill-effects. And some animals (e.g. some types of turtle) already live 200 years, so humans are manifestly not at some theoretical upper limit. At a guess, the most important thing that is needed might be reliable, non-Goodhart-able biomarkers of human aging, as that will allow fast iteration on experiments and clinical trials. Once human lifespan is 150, we may be able to reach “escape velocity”, buying enough time that most of those currently alive today will be able to live as long as they want, although there’s certainly no guarantee this is biologically possible.
It is worth looking at this list and reflecting on how different the world will be if all of it is achieved 7-12 years from now (which would be in line with an aggressive AI timeline). It goes without saying that it would be an unimaginable humanitarian triumph, the elimination all at once of most of the scourges that have haunted humanity for millennia. Many of my friends and colleagues are raising children, and when those children grow up, I hope that any mention of disease will sound to them the way scurvy, smallpox, or bubonic plague sounds to us. That generation will also benefit from increased biological freedom and self-expression, and with luck may also be able to live as long as they want.
It’s hard to overestimate how surprising these changes will be to everyone except the small community of people who expected powerful AI. For example, thousands of economists and policy experts in the US currently debate how to keep Social Security and Medicare solvent, and more broadly how to keep down the cost of healthcare (which is mostly consumed by those over 70 and especially those with terminal illnesses such as cancer). The situation for these programs is likely to be radically improved if all this comes to pass20, as the ratio of working age to retired population will change drastically. No doubt these challenges will be replaced with others, such as how to ensure widespread access to the new technologies, but it is worth reflecting on how much the world will change even if biology is the only area to be successfully accelerated by AI.
2. Neuroscience and mind
In the previous section I focused on physical diseases and biology in general, and didn’t cover neuroscience or mental health. But neuroscience is a subdiscipline of biology and mental health is just as important as physical health. In fact, if anything, mental health affects human well-being even more directly than physical health. Hundreds of millions of people have very low quality of life due to problems like addiction, depression, schizophrenia, low-functioning autism, PTSD, psychopathy21, or intellectual disabilities. Billions more struggle with everyday problems that can often be interpreted as much milder versions of one of these severe clinical disorders. And as with general biology, it may be possible to go beyond addressing problems to improving the baseline quality of human experience.
The basic framework that I laid out for biology applies equally to neuroscience. The field is propelled forward by a small number of discoveries often related to tools for measurement or precise intervention – in the list of those above, optogenetics was a neuroscience discovery, and more recently CLARITY and expansion microscopy are advances in the same vein, in addition to many of the general cell biology methods directly carrying over to neuroscience. I think the rate of these advances will be similarly accelerated by AI and therefore that the framework of “100 years of progress in 5-10 years” applies to neuroscience in the same way it does to biology and for the same reasons. As in biology, the progress in 20th century neuroscience was enormous – for example we didn’t even understand how or why neurons fired until the 1950’s. Thus, it seems reasonable to expect AI-accelerated neuroscience to produce rapid progress over a few years.
There is one thing we should add to this basic picture, which is that some of the things we’ve learned (or are learning) about AI itself in the last few years are likely to help advance neuroscience, even if it continues to be done only by humans. Interpretability is an obvious example: although biological neurons superficially operate in a completely different manner from artificial neurons (they communicate via spikes and often spike rates, so there is a time element not present in artificial neurons, and a bunch of details relating to cell physiology and neurotransmitters modifies their operation substantially), the basic question of “how do distributed, trained networks of simple units that perform combined linear/non-linear operations work together to perform important computations” is the same, and I strongly suspect the details of individual neuron communication will be abstracted away in most of the interesting questions about computation and circuits22. As just one example of this, a computational mechanism discovered by interpretability researchers in AI systems was recently rediscovered in the brains of mice.
It is much easier to do experiments on artificial neural networks than on real ones (the latter often requires cutting into animal brains), so interpretability may well become a tool for improving our understanding of neuroscience. Furthermore, powerful AIs will themselves probably be able to develop and apply this tool better than humans can.
Beyond just interpretability though, what we have learned from AI about how intelligent systems are trained should (though I am not sure it has yet) cause a revolution in neuroscience. When I was working in neuroscience, a lot of people focused on what I would now consider the wrong questions about learning, because the concept of the scaling hypothesis / bitter lesson didn’t exist yet. The idea that a simple objective function plus a lot of data can drive incredibly complex behaviors makes it more interesting to understand the objective functions and architectural biases and less interesting to understand the details of the emergent computations. I have not followed the field closely in recent years, but I have a vague sense that computational neuroscientists have still not fully absorbed the lesson. My attitude to the scaling hypothesis has always been “aha – this is an explanation, at a high level, of how intelligence works and how it so easily evolved”, but I don’t think that’s the average neuroscientist’s view, in part because the scaling hypothesis as “the secret to intelligence” isn’t fully accepted even within AI.
I think that neuroscientists should be trying to combine this basic insight with the particularities of the human brain (biophysical limitations, evolutionary history, topology, details of motor and sensory inputs/outputs) to figure out some of neuroscience’s key puzzles. Some likely already are doing so, but I suspect it’s not enough yet, and that AI neuroscientists will be able to leverage this angle more effectively to accelerate progress.
I expect AI to accelerate neuroscientific progress along four distinct routes, all of which can hopefully work together to cure mental illness and improve function:
Traditional molecular biology, chemistry, and genetics. This is essentially the same story as general biology in section 1, and AI can likely speed it up via the same mechanisms. There are many drugs that modulate neurotransmitters in order to alter brain function, affect alertness or perception, change mood, etc., and AI can help us invent many more. AI can probably also accelerate research on the genetic basis of mental illness.
Fine-grained neural measurement and intervention. This is the ability to measure what a lot of individual neurons or neuronal circuits are doing, and intervene to change their behavior. Optogenetics and neural probes are technologies capable of both measurement and intervention in live organisms, and a number of very advanced methods (such as molecular ticker tapes to read out the firing patterns of large numbers of individual neurons) have also been proposed and seem possible in principle.
Advanced computational neuroscience. As noted above, both the specific insights and the gestalt of modern AI can probably be applied fruitfully to questions in systems neuroscience, including perhaps uncovering the real causes and dynamics of complex diseases like psychosis or mood disorders.
Behavioral interventions. I haven’t much mentioned it given the focus on the biological side of neuroscience, but psychiatry and psychology have of course developed a wide repertoire of behavioral interventions over the 20th century; it stands to reason that AI could accelerate these as well, both the development of new methods and helping patients to adhere to existing methods. More broadly, the idea of an “AI coach” who always helps you to be the best version of yourself, who studies your interactions and helps you learn to be more effective, seems very promising.
It’s my guess that these four routes of progress working together would, as with physical disease, be on track to lead to the cure or prevention of most mental illness in the next 100 years even if AI was not involved – and thus might reasonably be completed in 5-10 AI-accelerated years. Concretely my guess at what will happen is something like:
Most mental illness can probably be cured. I’m not an expert in psychiatric disease (my time in neuroscience was spent building probes to study small groups of neurons) but it’s my guess that diseases like PTSD, depression, schizophrenia, addiction, etc. can be figured out and very effectively treated via some combination of the four directions above. The answer is likely to be some combination of “something went wrong biochemically” (although it could be very complex) and “something went wrong with the neural network, at a high level”. That is, it’s a systems neuroscience question—though that doesn’t gainsay the impact of the behavioral interventions discussed above. Tools for measurement and intervention, especially in live humans, seem likely to lead to rapid iteration and progress.
Conditions that are very “structural” may be more difficult, but not impossible. There’s some evidence that psychopathy is associated with obvious neuroanatomical differences – that some brain regions are simply smaller or less developed in psychopaths. Psychopaths are also believed to lack empathy from a young age; whatever is different about their brain, it was probably always that way. The same may be true of some intellectual disabilities, and perhaps other conditions. Restructuring the brain sounds hard, but it also seems like a task with high returns to intelligence. Perhaps there is some way to coax the adult brain into an earlier or more plastic state where it can be reshaped. I’m very uncertain how possible this is, but my instinct is to be optimistic about what AI can invent here.
Effective genetic prevention of mental illness seems possible. Most mental illness is partially heritable, and genome-wide association studies are starting to gain traction on identifying the relevant factors, which are often many in number. It will probably be possible to prevent most of these diseases via embryo screening, similar to the story with physical disease. One difference is that psychiatric disease is more likely to be polygenic (many genes contribute), so due to complexity there’s an increased risk of unknowingly selecting against positive traits that are correlated with disease. Oddly, however, recent GWAS results seem to suggest that these correlations might have been overstated. In any case, AI-accelerated neuroscience may help us to figure these things out. Of course, embryo screening for complex traits raises a number of societal issues and will be controversial, though I would guess that most people would support screening for severe or debilitating mental illness.
Everyday problems that we don’t think of as clinical disease will also be solved. Most of us have everyday psychological problems that are not ordinarily thought of as rising to the level of clinical disease. Some people are quick to anger, others have trouble focusing or are often drowsy, some are fearful or anxious, or react badly to change. Today, drugs already exist to help with e.g. alertness or focus (caffeine, modafinil, ritalin) but as with many other previous areas, much more is likely to be possible. Probably many more such drugs exist and have not been discovered, and there may also be totally new modalities of intervention, such as targeted light stimulation (see optogenetics above) or magnetic fields. Given how many drugs we’ve developed in the 20th century that tune cognitive function and emotional state, I’m very optimistic about the “compressed 21st” where everyone can get their brain to behave a bit better and have a more fulfilling day-to-day experience.
Human baseline experience can be much better. Taking one step further, many people have experienced extraordinary moments of revelation, creative inspiration, compassion, fulfillment, transcendence, love, beauty, or meditative peace. The character and frequency of these experiences differs greatly from person to person and within the same person at different times, and can also sometimes be triggered by various drugs (though often with side effects). All of this suggests that the “space of what is possible to experience” is very broad and that a larger fraction of people’s lives could consist of these extraordinary moments. It is probably also possible to improve various cognitive functions across the board. This is perhaps the neuroscience version of “biological freedom” or “extended lifespans”.
One topic that often comes up in sci-fi depictions of AI, but that I intentionally haven’t discussed here, is “mind uploading”, the idea of capturing the pattern and dynamics of a human brain and instantiating them in software. This topic could be the subject of an essay all by itself, but suffice it to say that while I think uploading is almost certainly possible in principle, in practice it faces significant technological and societal challenges, even with powerful AI, that likely put it outside the 5-10 year window we are discussing.
In summary, AI-accelerated neuroscience is likely to vastly improve treatments for, or even cure, most mental illness as well as greatly expand “cognitive and mental freedom” and human cognitive and emotional abilities. It will be every bit as radical as the improvements in physical health described in the previous section. Perhaps the world will not be visibly different on the outside, but the world as experienced by humans will be a much better and more humane place, as well as a place that offers greater opportunities for self-actualization. I also suspect that improved mental health will ameliorate a lot of other societal problems, including ones that seem political or economic.
3. Economic development and poverty
The previous two sections are about developing new technologies that cure disease and improve the quality of human life. However, an obvious question, from a humanitarian perspective, is: “will everyone have access to these technologies?”
It is one thing to develop a cure for a disease; it is another to eradicate it from the world. More broadly, many existing health interventions have not yet been applied everywhere in the world, and for that matter the same is true of (non-health) technological improvements in general. Another way to say this is that living standards in many parts of the world are still desperately poor: GDP per capita is ~$2,000 in Sub-Saharan Africa as compared to ~$75,000 in the United States. If AI further increases economic growth and quality of life in the developed world, while doing little to help the developing world, we should view that as a terrible moral failure and a blemish on the genuine humanitarian victories in the previous two sections. Ideally, powerful AI should help the developing world catch up to the developed world, even as it revolutionizes the latter.
I am not as confident that AI can address inequality and economic growth as I am that it can invent fundamental technologies, because technology has such obvious high returns to intelligence (including the ability to route around complexities and lack of data) whereas the economy involves a lot of constraints from humans, as well as a large dose of intrinsic complexity. I am somewhat skeptical that an AI could solve the famous “socialist calculation problem”23 and I don’t think governments will (or should) turn over their economic policy to such an entity, even if it could do so. There are also problems like how to convince people to take treatments that are effective but that they may be suspicious of.
The challenges facing the developing world are made even more complicated by pervasive corruption in both private and public sectors. Corruption creates a vicious cycle: it exacerbates poverty, and poverty in turn breeds more corruption. AI-driven plans for economic development need to reckon with corruption, weak institutions, and other very human challenges.
Nevertheless, I do see significant reasons for optimism. Diseases have been eradicated and many countries have gone from poor to rich, and it is clear that the decisions involved in these tasks exhibit high returns to intelligence (despite human constraints and complexity). Therefore, AI can likely do them better than they are currently being done. There may also be targeted interventions that get around the human constraints and that AI could focus on. More importantly though, we have to try. Both AI companies and developed world policymakers will need to do their part to ensure that the developing world is not left out; the moral imperative is too great. So in this section, I’ll continue to make the optimistic case, but keep in mind everywhere that success is not guaranteed and depends on our collective efforts.
Below I make some guesses about how I think things may go in the developing world over the 5-10 years after powerful AI is developed:
Distribution of health interventions. The area where I am perhaps most optimistic is distributing health interventions throughout the world. Diseases have actually been eradicated by top-down campaigns: smallpox was fully eliminated in the 1970’s, and polio and guinea worm are nearly eradicated, with fewer than 100 cases per year. Mathematically sophisticated epidemiological modeling plays an active role in disease eradication campaigns, and it seems very likely that there is room for smarter-than-human AI systems to do a better job of it than humans do (a minimal sketch of the threshold arithmetic behind such modeling appears after this list). The logistics of distribution can probably also be greatly optimized. One thing I learned as an early donor to GiveWell is that some health charities are way more effective than others; the hope is that AI-accelerated efforts would be more effective still. Additionally, some biological advances actually make the logistics of distribution much easier: for example, malaria has been difficult to eradicate because it requires treatment each time the disease is contracted; a vaccine that only needs to be administered once makes the logistics much simpler (and such vaccines for malaria are in fact currently being developed). Even simpler distribution mechanisms are possible: some diseases could in principle be eradicated by targeting their animal carriers, for example releasing mosquitoes infected with a bacterium that blocks their ability to carry a disease (which then spread it to all the other mosquitoes) or simply using gene drives to wipe out the mosquitoes. This requires one or a few centralized actions, rather than a coordinated campaign that must individually treat millions. Overall, I think 5-10 years is a reasonable timeline for a good fraction (maybe 50%) of AI-driven health benefits to propagate to even the poorest countries in the world. A good goal might be for the developing world 5-10 years after powerful AI to at least be substantially healthier than the developed world is today, even if it continues to lag behind the developed world. Accomplishing this will of course require a huge effort in global health, philanthropy, political advocacy, and many other efforts, which both AI developers and policymakers should help with.
Economic growth. Can the developing world quickly catch up to the developed world, not just in health, but across the board economically? There is some precedent for this: in the final decades of the 20th century, several East Asian economies achieved sustained ~10% annual real GDP growth rates, allowing them to catch up with the developed world. Human economic planners made the decisions that led to this success, not by directly controlling entire economies but by pulling a few key levers (such as an industrial policy of export-led growth, and resisting the temptation to rely on natural resource wealth); it’s plausible that “AI finance ministers and central bankers” could replicate or exceed this 10% accomplishment. An important question is how to get developing world governments to adopt such AI advisors while respecting the principle of self-determination—some may be enthusiastic about it, but others are likely to be skeptical. On the optimistic side, many of the health interventions in the previous bullet point are likely to organically increase economic growth: eradicating AIDS/malaria/parasitic worms would have a transformative effect on productivity, not to mention the economic benefits that some of the neuroscience interventions (such as improved mood and focus) would have in developed and developing world alike. Finally, non-health AI-accelerated technology (such as energy technology, transport drones, improved building materials, better logistics and distribution, and so on) may simply permeate the world naturally; for example, even cell phones quickly permeated sub-Saharan Africa via market mechanisms, without needing philanthropic efforts. On the more negative side, while AI and automation have many potential benefits, they also pose challenges for economic development, particularly for countries that haven't yet industrialized. Finding ways to ensure these countries can still develop and improve their economies in an age of increasing automation is an important challenge for economists and policymakers to address. Overall, a dream scenario—perhaps a goal to aim for—would be 20% annual GDP growth rate in the developing world, with 10% each coming from AI-enabled economic decisions and the natural spread of AI-accelerated technologies, including but not limited to health; the compounding arithmetic behind this figure is sketched after this list. If achieved, this would bring sub-Saharan Africa to the current per-capita GDP of China in 5-10 years, while raising much of the rest of the developing world to levels higher than the current US GDP. Again, this is a dream scenario, not what happens by default: it’s something all of us must work together to make more likely.
Food security24. Advances in crop technology like better fertilizers and pesticides, more automation, and more efficient land use drastically increased crop yields across the 20th century, saving millions of people from hunger. Genetic engineering is currently improving many crops even further. Finding even more ways to do this—as well as to make agricultural supply chains even more efficient—could give us an AI-driven second Green Revolution, helping close the gap between the developing and developed world.
Mitigating climate change. Climate change will be felt much more strongly in the developing world, hampering its development. We can expect that AI will lead to improvements in technologies that slow or prevent climate change, from atmospheric carbon-removal and clean energy technology to lab-grown meat that reduces our reliance on carbon-intensive factory farming. Of course, as discussed above, technology isn’t the only thing restricting progress on climate change—as with all of the other issues discussed in this essay, human societal factors are important. But there’s good reason to think that AI-enhanced research will give us the means to make mitigating climate change far less costly and disruptive, rendering many of the objections moot and freeing up developing countries to make more economic progress.
Inequality within countries. I’ve mostly talked about inequality as a global phenomenon (which I do think is its most important manifestation), but of course inequality also exists within countries. With advanced health interventions and especially radical increases in lifespan or cognitive enhancement drugs, there will certainly be valid worries that these technologies are “only for the rich”. I am more optimistic about within-country inequality especially in the developed world, for two reasons. First, markets function better in the developed world, and markets are typically good at bringing down the cost of high-value technologies over time25. Second, developed world political institutions are more responsive to their citizens and have greater state capacity to execute universal access programs—and I expect citizens to demand access to technologies that so radically improve quality of life. Of course it’s not predetermined that such demands succeed—and here is another place where we collectively have to do all we can to ensure a fair society. There is a separate problem in inequality of wealth (as opposed to inequality of access to life-saving and life-enhancing technologies), which seems harder and which I discuss in Section 5.
The opt-out problem. One concern in both developed and developing world alike is people opting out of AI-enabled benefits (similar to the anti-vaccine movement, or Luddite movements more generally). There could end up being bad feedback cycles where, for example, the people who are least able to make good decisions opt out of the very technologies that improve their decision-making abilities, leading to an ever-increasing gap and even creating a dystopian underclass (some researchers have argued that this will undermine democracy, a topic I discuss further in the next section). This would, once again, place a moral blemish on AI’s positive advances. This is a difficult problem to solve as I don’t think it is ethically okay to coerce people, but we can at least try to increase people’s scientific understanding—and perhaps AI itself can help us with this. One hopeful sign is that historically anti-technology movements have been more bark than bite: railing against modern technology is popular, but most people adopt it in the end, at least when it’s a matter of individual choice. Individuals tend to adopt most health and consumer technologies, while technologies that are truly hampered, like nuclear power, tend to be collective political decisions.
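Two of the items above lean on quantitative reasoning that is easy to make concrete. First, on "Distribution of health interventions": the herd-immunity threshold is the textbook example of the epidemiological modeling mentioned there. A minimal sketch, assuming standard SIR-model logic; the R0 values are commonly cited figures used here purely for illustration:

```python
# Classic SIR-model threshold: sustained transmission stops once the immune
# fraction of the population exceeds 1 - 1/R0 (R0 = basic reproduction number).
# The R0 values below are commonly cited, used here only for illustration.
def herd_immunity_threshold(r0: float) -> float:
    return 1.0 - 1.0 / r0

for disease, r0 in [("smallpox", 6.0), ("polio", 5.0), ("measles", 15.0)]:
    print(f"{disease}: immunize roughly {herd_immunity_threshold(r0):.0%} of the population")

# smallpox: ~83%   polio: ~80%   measles: ~93%
# Real eradication campaigns layer spatial structure, surveillance, and
# logistics on top of this simple threshold; that is the modeling the item
# above argues smarter-than-human systems could do better.
```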
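Second, on "Economic growth": the compounding arithmetic behind the 20% "dream scenario" can be checked directly. A minimal sketch; the $2,000 starting point is the figure cited earlier in this section, and ~$12,500 is a common recent estimate of China's per-capita GDP:

```python
# Compound growth check for the 20% "dream scenario" (illustrative figures).
gdp_per_capita = 2_000   # approx. Sub-Saharan Africa today, as cited above (USD)
growth_factor = 1.20     # 20% annual growth

for years in (5, 10):
    print(f"after {years} years: ${gdp_per_capita * growth_factor ** years:,.0f}")

# after 5 years:  ~$4,977   (well short)
# after 10 years: ~$12,383  (roughly China's current per-capita GDP)
# So the scenario closes the gap only at the top of the 5-10 year range.
```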
Overall, I am optimistic about quickly bringing AI’s biological advances to people in the developing world. I am hopeful, though not confident, that AI can also enable unprecedented economic growth rates and allow the developing world to at least surpass where the developed world is now. I am concerned about the “opt out” problem in both the developed and developing world, but suspect that it will peter out over time and that AI can help accelerate this process. It won’t be a perfect world, and those who are behind won’t fully catch up, at least not in the first few years. But with strong efforts on our part, we may be able to get things moving in the right direction—and fast. If we do, we can make at least a downpayment on the promises of dignity and equality that we owe to every human being on earth.
4. Peace and governance
Suppose that everything in the first three sections goes well: disease, poverty, and inequality are significantly reduced and the baseline of human experience is raised substantially. It does not follow that all major causes of human suffering are solved. Humans are still a threat to each other. Although there is a trend of technological improvement and economic development leading to democracy and peace, it is a very loose trend, with frequent (and recent) backsliding. At the dawn of the 20th century, people thought they had put war behind them; then came the two world wars. Thirty years ago Francis Fukuyama wrote about “the End of History” and a final triumph of liberal democracy; that hasn’t happened yet. Twenty years ago US policymakers believed that free trade with China would cause it to liberalize as it became richer; that very much didn’t happen, and we now seem headed for a second cold war with a resurgent authoritarian bloc. And plausible theories suggest that internet technology may actually advantage authoritarianism, not democracy as initially believed (e.g. in the “Arab Spring” period). It seems important to try to understand how powerful AI will intersect with these issues of peace, democracy, and freedom.
Unfortunately, I see no strong reason to believe AI will preferentially or structurally advance democracy and peace, in the same way that I think it will structurally advance human health and alleviate poverty. Human conflict is adversarial and AI can in principle help both the “good guys” and the “bad guys”. If anything, some structural factors seem worrying: AI seems likely to enable much better propaganda and surveillance, both major tools in the autocrat’s toolkit. It’s therefore up to us as individual actors to tilt things in the right direction: if we want AI to favor democracy and individual rights, we are going to have to fight for that outcome. I feel even more strongly about this than I do about international inequality: the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely, and will require great sacrifice and commitment on all of our parts, as it often has in the past.
I think of the issue as having two parts: international conflict, and the internal structure of nations. On the international side, it seems very important that democracies have the upper hand on the world stage when powerful AI is created. AI-powered authoritarianism seems too terrible to contemplate, so democracies need to be able to set the terms by which powerful AI is brought into the world, both to avoid being overpowered by authoritarians and to prevent human rights abuses within authoritarian countries.
My current guess at the best way to do this is via an “entente strategy”26, in which a coalition of democracies seeks to gain a clear advantage (even just a temporary one) on powerful AI by securing its supply chain, scaling quickly, and blocking or delaying adversaries’ access to key resources like chips and semiconductor equipment. This coalition would on one hand use AI to achieve robust military superiority (the stick) while at the same time offering to distribute the benefits of powerful AI (the carrot) to a wider and wider group of countries in exchange for supporting the coalition’s strategy to promote democracy (this would be a bit analogous to “Atoms for Peace”). The coalition would aim to gain the support of more and more of the world, isolating our worst adversaries and eventually putting them in a position where they are better off taking the same bargain as the rest of the world: give up competing with democracies in order to receive all the benefits and not fight a superior foe.
If we can do all this, we will have a world in which democracies lead on the world stage and have the economic and military strength to avoid being undermined, conquered, or sabotaged by autocracies, and may be able to parlay their AI superiority into a durable advantage. This could optimistically lead to an “eternal 1991”—a world where democracies have the upper hand and Fukuyama’s dreams are realized. Again, this will be very difficult to achieve, and will in particular require close cooperation between private AI companies and democratic governments, as well as extraordinarily wise decisions about the balance between carrot and stick.
Even if all that goes well, it leaves the question of the fight between democracy and autocracy within each country. It is obviously hard to predict what will happen here, but I do have some optimism that given a global environment in which democracies control the most powerful AI, then AI may actually structurally favor democracy everywhere. In particular, in this environment democratic governments can use their superior AI to win the information war: they can counter influence and propaganda operations by autocracies and may even be able to create a globally free information environment by providing channels of information and AI services in a way that autocracies lack the technical ability to block or monitor. It probably isn’t necessary to deliver propaganda, only to counter malicious attacks and unblock the free flow of information. Although not immediate, a level playing field like this stands a good chance of gradually tilting global governance towards democracy, for several reasons.
First, the increases in quality of life in Sections 1-3 should, all things equal, promote democracy: historically they have, to at least some extent. In particular I expect improvements in mental health, well-being, and education to increase democracy, as all three are negatively correlated with support for authoritarian leaders. In general people want more self-expression when their other needs are met, and democracy is among other things a form of self-expression. Conversely, authoritarianism thrives on fear and resentment.
Second, there is a good chance free information really does undermine authoritarianism, as long as the authoritarians can’t censor it. And uncensored AI can also bring individuals powerful tools for undermining repressive governments. Repressive governments survive by denying people a certain kind of common knowledge, keeping them from realizing that “the emperor has no clothes”. For example Srđa Popović, who helped to topple the Milošević government in Serbia, has written extensively about techniques for psychologically robbing authoritarians of their power, for breaking the spell and rallying support against a dictator. A superhumanly effective AI version of Popović (whose skills seem like they have high returns to intelligence) in everyone’s pocket, one that dictators are powerless to block or censor, could create a wind at the backs of dissidents and reformers across the world. To say it again, this will be a long and protracted fight, one where victory is not assured, but if we design and build AI in the right way, it may at least be a fight where the advocates of freedom everywhere have an advantage.
As with neuroscience and biology, we can also ask how things could be “better than normal”—not just how to avoid autocracy, but how to make democracies better than they are today. Even within democracies, injustices happen all the time. Rule-of-law societies make a promise to their citizens that everyone will be equal under the law and everyone is entitled to basic human rights, but obviously people do not always receive those rights in practice. That this promise is even partially fulfilled makes it something to be proud of, but can AI help us do better?
For example, could AI improve our legal and judicial system by making decisions and processes more impartial? Today people mostly worry in legal or judicial contexts that AI systems will be a cause of discrimination, and these worries are important and need to be defended against. At the same time, the vitality of democracy depends on harnessing new technologies to improve democratic institutions, not just responding to risks. A truly mature and successful implementation of AI has the potential to reduce bias and be fairer for everyone.
For centuries, legal systems have faced the dilemma that the law aims to be impartial, but is inherently subjective and thus must be interpreted by biased humans. Trying to make the law fully mechanical hasn’t worked because the real world is messy and can’t always be captured in mathematical formulas. Instead legal systems rely on notoriously imprecise criteria like “cruel and unusual punishment” or “utterly without redeeming social importance”, which humans then interpret—and often do so in a manner that displays bias, favoritism, or arbitrariness. “Smart contracts” in cryptocurrencies haven’t revolutionized law because ordinary code isn’t smart enough to adjudicate all that much of interest. But AI might be smart enough for this: it is the first technology capable of making broad, fuzzy judgements in a repeatable and mechanical way.
I am not suggesting that we literally replace judges with AI systems, but the combination of impartiality with the ability to understand and process messy, real world situations feels like it should have some serious positive applications to law and justice. At the very least, such systems could work alongside humans as an aid to decision-making. Transparency would be important in any such system, and a mature science of AI could conceivably provide it: the training process for such systems could be extensively studied, and advanced interpretability techniques could be used to see inside the final model and assess it for hidden biases, in a way that is simply not possible with humans. Such AI tools could also be used to monitor for violations of fundamental rights in a judicial or police context, making constitutions more self-enforcing.
In a similar vein, AI could be used to both aggregate opinions and drive consensus among citizens, resolving conflict, finding common ground, and seeking compromise. Some early ideas in this direction have been undertaken by the computational democracy project, including collaborations with Anthropic. A more informed and thoughtful citizenry would obviously strengthen democratic institutions.
There is also a clear opportunity for AI to be used to help provision government services—such as health benefits or social services—that are in principle available to everyone but in practice often severely lacking, and worse in some places than others. This includes health services, the DMV, taxes, social security, building code enforcement, and so on. Having a very thoughtful and informed AI whose job is to give you everything you’re legally entitled to by the government in a way you can understand—and who also helps you comply with often confusing government rules—would be a big deal. Increasing state capacity both helps to deliver on the promise of equality under the law, and strengthens respect for democratic governance. Poorly implemented services are currently a major driver of cynicism about government27.
All of these are somewhat vague ideas, and as I said at the beginning of this section, I am not nearly as confident in their feasibility as I am in the advances in biology, neuroscience, and poverty alleviation. They may be unrealistically utopian. But the important thing is to have an ambitious vision, to be willing to dream big and try things out. The vision of AI as a guarantor of liberty, individual rights, and equality under the law is too powerful a vision not to fight for. A 21st century, AI-enabled polity could be both a stronger protector of individual freedom, and a beacon of hope that helps make liberal democracy the form of government that the whole world wants to adopt.
5. Work and meaning
Even if everything in the preceding four sections goes well—not only do we alleviate disease, poverty, and inequality, but liberal democracy becomes the dominant form of government, and existing liberal democracies become better versions of themselves—at least one important question still remains. “It’s great we live in such a technologically advanced world as well as a fair and decent one”, someone might object, “but with AIs doing everything, how will humans have meaning? For that matter, how will they survive economically?”
I think this question is more difficult than the others. I don’t mean that I am necessarily more pessimistic about it than I am about the other questions (although I do see challenges). I mean that it is fuzzier and harder to predict in advance, because it relates to macroscopic questions about how society is organized that tend to resolve themselves only over time and in a decentralized manner. For example, historical hunter-gatherer societies might have imagined that life is meaningless without hunting and various kinds of hunting-related religious rituals, and would have imagined that our well-fed technological society is devoid of purpose. They might also have not understood how our economy can provide for everyone, or what function people can usefully serve in a mechanized society.
Nevertheless, it’s worth saying at least a few words, while keeping in mind that the brevity of this section is not at all to be taken as a sign that I don’t take these issues seriously—on the contrary, it is a sign of a lack of clear answers.
On the question of meaning, I think it is very likely a mistake to believe that tasks you undertake are meaningless simply because an AI could do them better. Most people are not the best in the world at anything, and it doesn’t seem to bother them particularly much. Of course today they can still contribute through comparative advantage, and may derive meaning from the economic value they produce, but people also greatly enjoy activities that produce no economic value. I spend plenty of time playing video games, swimming, walking around outside, and talking to friends, all of which generate zero economic value. I might spend a day trying to get better at a video game, or faster at biking up a mountain, and it doesn’t really matter to me that someone somewhere is much better at those things. In any case I think meaning comes mostly from human relationships and connection, not from economic labor. People do want a sense of accomplishment, even a sense of competition, and in a post-AI world it will be perfectly possible to spend years attempting some very difficult task with a complex strategy, similar to what people do today when they embark on research projects, try to become Hollywood actors, or found companies28. The facts that (a) an AI somewhere could in principle do this task better, and (b) this task is no longer an economically rewarded element of a global economy, don’t seem to me to matter very much.
The economic piece actually seems more difficult to me than the meaning piece. By “economic” in this section I mean the possible problem that most or all humans may not be able to contribute meaningfully to a sufficiently advanced AI-driven economy. This is a more macro problem than the separate problem of inequality, especially inequality in access to the new technologies, which I discussed in Section 3.
First of all, in the short term I agree with arguments that comparative advantage will continue to keep humans relevant and in fact increase their productivity, and may even in some ways level the playing field between humans. As long as AI is only better at 90% of a given job, the other 10% will cause humans to become highly leveraged, increasing compensation and in fact creating a bunch of new human jobs complementing and amplifying what AI is good at, such that the “10%” expands to continue to employ almost everyone. In fact, even if AI can do 100% of things better than humans, but it remains inefficient or expensive at some tasks, or if the resource inputs to humans and AIs are meaningfully different, then the logic of comparative advantage continues to apply. One area humans are likely to maintain a relative (or even absolute) advantage for a significant time is the physical world. Thus, I think that the human economy may continue to make sense even a little past the point where we reach “a country of geniuses in a datacenter”.
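Since comparative advantage carries much of the weight of this argument, here is its logic in miniature. A minimal sketch; the task names and rates are invented for illustration, not estimates of anything:

```python
# Comparative advantage in miniature: even when the AI is absolutely better at
# every task, a human hour is worth the AI time it frees up, and it frees up
# the most time where the human's *relative* disadvantage is smallest.
# Task names and rates are invented purely for illustration.
ai_rate    = {"research": 100.0, "physical work": 50.0}  # output units per hour
human_rate = {"research":   1.0, "physical work": 10.0}  # worse at both tasks

for task in ai_rate:
    freed_ai_hours = human_rate[task] / ai_rate[task]
    print(f"1 human-hour of {task} frees {freed_ai_hours:.2f} AI-hours")

# research:      frees 0.01 AI-hours (a 100x gap)
# physical work: frees 0.20 AI-hours (only a 5x gap)
# -> Humans gravitate to the task with the smallest relative gap, and their
#    labor stays valuable as long as AI time has any opportunity cost at all.
```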
However, I do think in the long run AI will become so broadly effective and so cheap that this will no longer apply. At that point our current economic setup will no longer make sense, and there will be a need for a broader societal conversation about how the economy should be organized.
While that might sound crazy, the fact is that civilization has successfully navigated major economic shifts in the past: from hunter-gathering to farming, farming to feudalism, and feudalism to industrialism. I suspect that some new and stranger thing will be needed, and that it’s something no one today has done a good job of envisioning. It could be as simple as a large universal basic income for everyone, although I suspect that will only be a small part of a solution. It could be a capitalist economy of AI systems, which then give out resources (huge amounts of them, since the overall economic pie will be gigantic) to humans based on some secondary economy of what the AI systems think makes sense to reward in humans (based on some judgment ultimately derived from human values). Perhaps the economy runs on Whuffie points. Or perhaps humans will continue to be economically valuable after all, in some way not anticipated by the usual economic models. All of these solutions have tons of possible problems, and it’s not possible to know whether they will make sense without lots of iteration and experimentation. And as with some of the other challenges, we will likely have to fight to get a good outcome here: exploitative or dystopian directions are clearly also possible and have to be prevented. Much more could be written about these questions and I hope to do so at some later time.
Taking stock
Through the varied topics above, I’ve tried to lay out a vision of a world that is both plausible if everything goes right with AI, and much better than the world today. I don’t know if this world is realistic, and even if it is, it will not be achieved without a huge amount of effort and struggle by many brave and dedicated people. Everyone (including AI companies!) will need to do their part both to prevent risks and to fully realize the benefits.
But it is a world worth fighting for. If all of this really does happen over 5 to 10 years—the defeat of most diseases, the growth in biological and cognitive freedom, the lifting of billions of people out of poverty to share in the new technologies, a renaissance of liberal democracy and human rights—I suspect everyone watching it will be surprised by the effect it has on them. I don’t mean the experience of personally benefiting from all the new technologies, although that will certainly be amazing. I mean the experience of watching a long-held set of ideals materialize in front of us all at once. I think many will be literally moved to tears by it.
Throughout writing this essay I noticed an interesting tension. In one sense the vision laid out here is extremely radical: it is not what almost anyone expects to happen in the next decade, and will likely strike many as an absurd fantasy. Some may not even consider it desirable; it embodies values and political choices that not everyone will agree with. But at the same time there is something blindingly obvious—something overdetermined—about it, as if many different attempts to envision a good world inevitably lead roughly here.
In Iain M. Banks’ The Player of Games29, the protagonist—a member of a society called the Culture, which is based on principles not unlike those I’ve laid out here—travels to a repressive, militaristic empire in which leadership is determined by competition in an intricate battle game. The game, however, is complex enough that a player’s strategy within it tends to reflect their own political and philosophical outlook. The protagonist manages to defeat the emperor in the game, showing that his values (the Culture’s values) represent a winning strategy even in a game designed by a society based on ruthless competition and survival of the fittest. A well-known post by Scott Alexander has the same thesis—that competition is self-defeating and tends to lead to a society based on compassion and cooperation. The “arc of the moral universe” is another similar concept.
I think the Culture’s values are a winning strategy because they’re the sum of a million small decisions that have clear moral force and that tend to pull everyone together onto the same side. Basic human intuitions of fairness, cooperation, curiosity, and autonomy are hard to argue with, and are cumulative in a way that our more destructive impulses often aren’t. It is easy to argue that children shouldn’t die of disease if we can prevent it, and easy from there to argue that everyone’s children deserve that right equally. From there it is not hard to argue that we should all band together and apply our intellects to achieve this outcome. Few disagree that people should be punished for attacking or hurting others unnecessarily, and from there it’s not much of a leap to the idea that punishments should be consistent and systematic across people. It is similarly intuitive that people should have autonomy and responsibility over their own lives and choices. These simple intuitions, if taken to their logical conclusion, lead eventually to rule of law, democracy, and Enlightenment values. If not inevitably, then at least as a statistical tendency, this is where humanity was already headed. AI simply offers an opportunity to get us there more quickly—to make the logic starker and the destination clearer.
Nevertheless, it is a thing of transcendent beauty. We have the opportunity to play some small role in making it real.
Thanks to Kevin Esvelt, Parag Mallick, Stuart Ritchie, Matt Yglesias, Erik Brynjolfsson, Jim McClave, Allan Dafoe, and many people at Anthropic for reviewing drafts of this essay.
To the winners of the 2024 Nobel prize in Chemistry, for showing us all the way.
Footnotes
1https://allpoetry.com/All-Watched-Over-By-Machines-Of-Loving-Grace ↩
2I do anticipate some minority of people’s reaction will be “this is pretty tame”. I think those people need to, in Twitter parlance, “touch grass”. But more importantly, tame is good from a societal perspective. I think there’s only so much change people can handle at once, and the pace I’m describing is probably close to the limits of what society can absorb without extreme turbulence. ↩
3I find AGI to be an imprecise term that has gathered a lot of sci-fi baggage and hype. I prefer "powerful AI" or "Expert-Level Science and Engineering" which get at what I mean without the hype. ↩
4In this essay, I use "intelligence" to refer to a general problem-solving capability that can be applied across diverse domains. This includes abilities like reasoning, learning, planning, and creativity. While I use "intelligence" as a shorthand throughout this essay, I acknowledge that the nature of intelligence is a complex and debated topic in cognitive science and AI research. Some researchers argue that intelligence isn't a single, unified concept but rather a collection of separate cognitive abilities. Others contend that there's a general factor of intelligence (g factor) underlying various cognitive skills. That’s a debate for another time. ↩
5This is roughly the current speed of AI systems – for example they can read a page of text in a couple seconds and write a page of text in maybe 20 seconds, which is 10-100x the speed at which humans can do these things. Over time larger models tend to make this slower but more powerful chips tend to make it faster; to date the two effects have roughly canceled out. ↩
6This might seem like a strawman position, but careful thinkers like Tyler Cowen and Matt Yglesias have raised it as a serious concern (though I don’t think they fully hold the view), and I don’t think it is crazy. ↩
7The closest economics work that I’m aware of to tackling this question is work on “general purpose technologies” and “intangible investments” that serve as complements to general purpose technologies. ↩
8This learning can include temporary, in-context learning, or traditional training; both will be rate-limited by the physical world. ↩
9In a chaotic system, small errors compound exponentially over time, so that even an enormous increase in computing power leads to only a small improvement in how far ahead it is possible to predict, and in practice measurement error may degrade this further. ↩
10Another factor is of course that powerful AI itself can potentially be used to create even more powerful AI. My assumption is that this might (in fact, probably will) occur, but that its effect will be smaller than you might imagine, precisely because of the “decreasing marginal returns to intelligence” discussed here. In other words, AI will continue to get smarter quickly, but its effect will eventually be limited by non-intelligence factors, and analyzing those is what matters most to the speed of scientific progress outside AI. ↩
11These achievements have been an inspiration to me and perhaps the most powerful existing example of AI being used to transform biology. ↩
12“Progress in science depends on new techniques, new discoveries and new ideas, probably in that order.” - Sydney Brenner ↩
13Thanks to Parag Mallick for suggesting this point. ↩
14I didn't want to clog up the text with speculation about what specific future discoveries AI-enabled science could make, but here is a brainstorm of some possibilities:
— Design of better computational tools like AlphaFold and AlphaProteo — that is, a general AI system speeding up our ability to make specialized AI computational biology tools.
— More efficient and selective CRISPR.
— More advanced cell therapies.
— Materials science and miniaturization breakthroughs leading to better implanted devices.
— Better control over stem cells, cell differentiation, and de-differentiation, and a resulting ability to regrow or reshape tissue.
— Better control over the immune system: turning it on selectively to address cancer and infectious disease, and turning it off selectively to address autoimmune diseases. ↩
15AI may of course also help with being smarter about choosing what experiments to run: improving experimental design, learning more from a first round of experiments so that the second round can narrow in on key questions, and so on. ↩
16Thanks to Matthew Yglesias for suggesting this point. ↩
17Fast evolving diseases, like the multidrug resistant strains that essentially use hospitals as an evolutionary laboratory to continually improve their resistance to treatment, could be especially stubborn to deal with, and could be the kind of thing that prevents us from getting to 100%. ↩
18Note it may be hard to know that we have doubled the human lifespan within the 5-10 year window. While we might have accomplished it, we may not know it yet within the study time-frame. ↩
19This is one place where I am willing, despite the obvious biological differences between curing diseases and slowing down the aging process itself, to instead look from a greater distance at the statistical trend and say “even though the details are different, I think human science would probably find a way to continue this trend; after all, smooth trends in anything complex are necessarily made by adding up very heterogeneous components.” ↩
20As an example, I’m told that an increase in productivity growth per year of 1% or even 0.5% would be transformative in projections related to these programs. If the ideas contemplated in this essay come to pass, productivity gains could be much larger than this. ↩
21The media loves to portray high status psychopaths, but the average psychopath is probably a person with poor economic prospects and poor impulse control who ends up spending significant time in prison. ↩
22I think this is somewhat analogous to the fact that many, though likely not all, of the results we’re learning from interpretability would continue to be relevant even if some of the architectural details of our current artificial neural nets, such as the attention mechanism, were changed or replaced in some way. ↩
23I suspect it is a bit like a classical chaotic system – beset by irreducible complexity that has to be managed in a mostly decentralized manner. Though as I say later in this section, more modest interventions may be possible. A counterargument, made to me by economist Erik Brynjolfsson, is that large companies (such as Walmart or Uber) are starting to have enough centralized knowledge to understand consumers better than any decentralized process could, perhaps forcing us to revise Hayek’s insights about who has the best local knowledge. ↩
24Thanks to Kevin Esvelt for suggesting this point. ↩
25For example, cell phones were initially a technology for the rich, but quickly became very cheap with year-over-year improvements happening so fast as to obviate any advantage of buying a “luxury” cell phone, and today most people have phones of similar quality. ↩
26This is the title of a forthcoming paper from RAND, that lays out roughly the strategy I describe. ↩
27When the average person thinks of public institutions, they probably think of their experience with the DMV, IRS, Medicare, or similar functions. Making these experiences more positive than they currently are seems like a powerful way to combat undue cynicism. ↩
28Indeed, in an AI-powered world, the range of such possible challenges and projects will be much vaster than it is today. ↩
29I am breaking my own rule not to make this about science fiction, but I’ve found it hard not to refer to it at least a bit. The truth is that science fiction is one of our only sources of expansive thought experiments about the future; I think it says something bad that it’s entangled so heavily with a particular narrow subculture. ↩
Text
After you died I could not hold a funeral, and so my life became a funeral. After you were wrapped in a waterproof tarp and carried away on a garbage truck. After unforgivable jets of water gushed, glittering, from the fountain. Everywhere the lights of temples were burning. In the flowers that bloom in spring, in the snowflakes. In the evenings that arrive day after day. In the flames of the candles you set in empty drink bottles. — Han Kang, from 《소년이 온다》 (Human Acts)
Text
A single branch of plum blossoms stands in the garden,
its white, white light nothing to be ashamed of.
Rain is falling from a clear sky,
and it does not know how to stop.
I only stood there taking the rain.
The metal and stone were washed by the rainwater,
scattered like the wind.
Look at the moon stained red.
On the river, not even the boats can be seen.
Text
Today's sunlight is useful for erasing feelings.
In public places the pigeons flock together.
Birds, too, must have blood types
and experiences of their own.
But people walk around telling lies, steadily.
Someone's blood type changes every day,
and someone doesn't know what kind of blood they have,
but the children worked hard at making new habits.
Today's weather turns honest easily.
The way raindrops pouring down all at once
move toward themselves.
The way a pigeon flattened on the roadside, little by little,
becomes the road.
Newspapers pile up under the pharmacy's shutter,
and after giving blood, people,
their bodies grown lighter, once again
began yesterday's lies.
In public places everyone
becomes a person of rich experience,
becomes learned
about the kinds of blood
— Lee Jang-wook, from "Kinds of Blood" (피의 종류), in 생년월일 (Date of Birth)
Text
All of this is merely an aside; I would rather you did not think my mind wanders. My mind does wander, but never more than I want it to.
— Leonora Carrington (trans. Lee Ji-won), 『귀나팔』 (The Hearing Trumpet)
Text
One cannot linger in poverty and low station; strip away your faults and achieve merit, and wealth and honor will follow. Besides, how can you catch a tiger's cub without entering the tiger's den?
— Lü Meng
Text
Even as I wish that just once I could translate something all the way to the end by myself, I am afraid that a moment will come, near the end, when there is no going back and I must make an unjust decision. (…) Even if I try to evade it, saying, "I don't want to take on any role. I am a translator, after all," that only works for the moment; a little time passes and once again an instant arrives that demands a decision. Truly, translation is work that demands decisions from beginning to end. That is why I do not want to complete a translation. I do not want to complete it, and naturally I do not want to give up halfway either. I can think of no better scheme than to drag it out.
— Tawada Yoko (trans. Yuraju), 『글자를 옮기는 사람』
Text
By being "thrown" onto the stage, the questions of modernism awaken the strangeness Heidegger called "thrownness into the world" (Geworfenheit). Like the television monitors flung mercilessly onto the stage floor, sending dull shockwaves through the air, in 「지옥」 (Inferno, 2008), directed by Romeo Castellucci, the questions of modernism, once "thrown" onto the stage, resonate in reality. Now, in this very moment.
— Seo Hyun-suk and Kim Seong-hee, 『미래 예술』 (Future Art)
Text
You and I sat on the flowerbed and talked about love. If you record a person's voice and play it back, then record that and play it back again, record it and play it and record it once more, over and over, all that remains at the end is a kind of sound wave, like a dolphin's cry. Yes, that really does seem like love. I should write a poem with that. So we talked, sitting on the flowerbed, eating corn.
You called while I was in labor. I felt like I was dying; a phone call was the last thing on my mind. You said you had won the bet, that you knew it, and you were delighted. Between walls so narrow that nothing could intervene. Alone, I pounded the wall with my fist and cried. Yun sat on the sofa, fretting over the phone. Today I reread "You Can Never Go Home Again." I like that poem best. I do.
The corn was wrapped in foil. I thought it was gimbap. But it was corn, and eating corn goes rather well with talking about love. But what about the reverse: if you recorded a dolphin's cry and played it and recorded it, over and over? That I don't know. I don't know, but I think it would be terribly sad.
Today I sat on the sofa with you and we talked about the moment when time stretches long, long, and then flips over. Sometimes it feels like being trapped in limbo. Still, something happy grows little by little, like an intangible horn. I am becoming detached from phenomena and from feelings. You said you were too. You said you were thinking about what could be written after that. I resolved, inwardly, to think about it too. That afterward, and the poems written in it, and the afterward written again from after those poems, and…… after repeating this countless times.
I picked up a worn, glittering piece of glass washed in from the sea.
I want to speak about love.
I thought there were aliens.
— Baek Eun-seon, "A History of Love" (사랑의 역사), full text