How civilization could destroy itself — and 4 ways we could prevent it | Nick Bostrom

Chris Anderson: Nick Bostrom. So, you have already given us
so many crazy ideas out there. I think a couple of decades ago, you made the case that we might
all be living in a simulation, or perhaps probably were. More recently, you’ve painted the most vivid examples
of how artificial general intelligence could go horribly wrong. And now this year, you’re about to publish a paper that presents something called
the vulnerable world hypothesis. And our job this evening is to
give the illustrated guide to that. So let’s do that. What is that hypothesis? Nick Bostrom: It’s trying to think about a sort of structural feature
of the current human condition. You like the urn metaphor, so I’m going to use that to explain it. So picture a big urn filled with balls representing ideas, methods,
possible technologies. You can think of the history
of human creativity as the process of reaching into this urn
and pulling out one ball after another, and the net effect so far
has been hugely beneficial, right? We’ve extracted a great many white balls, some various shades of gray,
mixed blessings. We haven’t so far
pulled out the black ball — a technology that invariably destroys
the civilization that discovers it. So the paper tries to think
about what could such a black ball be.
CA: So you define that ball as one that would inevitably bring about civilizational destruction.
NB: Unless we exit what I call the semi-anarchic default condition. But sort of, by default.
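To make the urn concrete, here is a toy Monte Carlo sketch of it (the per-draw probability is purely illustrative, not an estimate from the paper): if each new technology drawn carries some small independent chance of being a black ball, and balls can never be put back, cumulative risk climbs toward certainty as draws accumulate.

```python
import random

def draws_a_black_ball(num_draws, p_black=0.001):
    # Each draw is a new technology; p_black is purely illustrative.
    return any(random.random() < p_black for _ in range(num_draws))

def estimated_risk(num_draws, trials=2_000):
    """Monte Carlo estimate of ever extracting a black ball."""
    return sum(draws_a_black_ball(num_draws) for _ in range(trials)) / trials

# We can invent but not un-invent, so risk only accumulates:
for n in (100, 1_000, 5_000):
    print(f"{n:>5} draws -> ~{estimated_risk(n):.2f}")
# roughly 0.10, 0.63, 0.99; analytically, 1 - (1 - p_black) ** n
```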
CA: So, you make the case compelling by showing some sort of counterexamples where you believe that so far we’ve actually got lucky, that we might have pulled out that death ball without even knowing it. So there’s this quote, what’s this quote?
NB: Well, I guess
it’s just meant to illustrate the difficulty of foreseeing what basic discoveries will lead to. We just don’t have that capability. Because we have become quite good
at pulling out balls, but we don’t really have the ability
to put the ball back into the urn, right. We can invent, but we can’t un-invent. So our strategy, such as it is, is to hope that there is
no black ball in the urn. CA: So once it’s out, it’s out,
and you can’t put it back in, and you think we’ve been lucky. So talk through a couple
of these examples. You talk about different
types of vulnerability. NB: So the easiest type to understand is a technology
that just makes it very easy to cause massive amounts of destruction. Synthetic biology might be a fecund
source of that kind of black ball, but many other possible things we could — think of geoengineering,
really great, right? We could combat global warming, but you don’t want it
to get too easy either, you don’t want any random person
and his grandmother to have the ability to radically
alter the earth’s climate. Or maybe lethal autonomous drones, mass-produced, mosquito-sized
killer bot swarms. Nanotechnology,
artificial general intelligence. CA: You argue in the paper that it’s a matter of luck
that when we discovered that nuclear power could create a bomb, it might have been the case that you could have created a bomb with much easier resources,
accessible to anyone. NB: Yeah, so think back to the 1930s where for the first time we make
some breakthroughs in nuclear physics, some genius figures out that it’s possible
to create a nuclear chain reaction and then realizes
that this could lead to the bomb. And we do some more work, it turns out that what you require
to make a nuclear bomb is highly enriched uranium or plutonium, which are very difficult materials to get. You need ultracentrifuges, you need reactors, like,
massive amounts of energy. But suppose it had turned out instead there had been an easy way
to unlock the energy of the atom. That maybe by baking sand
in the microwave oven or something like that you could have created
a nuclear detonation. So we know that that’s
physically impossible. But before you did the relevant physics how could you have known
how it would turn out? CA: Although, couldn’t you argue that for life to evolve on Earth that implied sort of stable environment, that if it was possible to create
massive nuclear reactions relatively easy, the Earth would never have been stable, that we wouldn’t be here at all. NB: Yeah, unless there were something
that is easy to do on purpose but that wouldn’t happen by random chance. So, like things we can easily do, we can stack 10 blocks
on top of one another, but in nature, you’re not going to find,
like, a stack of 10 blocks. CA: OK, so this is probably the one that many of us worry about most, and yes, synthetic biology
is perhaps the quickest route that we can foresee
in our near future to get us here. NB: Yeah, and so think
about what that would have meant if, say, anybody by working
in their kitchen for an afternoon could destroy a city. It’s hard to see how
modern civilization as we know it could have survived that. Because in any population
of a million people, there will always be some
who would, for whatever reason, choose to use that destructive power. So if that apocalyptic residual would choose to destroy a city, or worse, then cities would get destroyed. CA: So here’s another type
of vulnerability. Talk about this. NB: Yeah, so in addition to these
kind of obvious types of black balls that would just make it possible
to blow up a lot of things, other types would act
by creating bad incentives for humans to do things that are harmful. So, the Type-2a, we might call it that, is to think about some technology
that incentivizes great powers to use their massive amounts of force
to create destruction. So, nuclear weapons were actually
very close to this, right? What we did, we spent
over 10 trillion dollars to build 70,000 nuclear warheads and put them on hair-trigger alert. And there were several times
during the Cold War we almost blew each other up. It’s not because a lot of people felt
this would be a great idea, let’s all spend 10 trillion dollars
to blow ourselves up, but the incentives were such
that we were finding ourselves — this could have been worse. Imagine if there had been
a safe first strike. Then it might have been very tricky, in a crisis situation, to refrain from launching
all their nuclear missiles. If nothing else, because you would fear
that the other side might do it. CA: Right, mutual assured destruction kept the Cold War relatively stable, without that, we might not be here now. NB: It could have been
more unstable than it was. And there could be
other properties of technology. It could have been harder
to have arms treaties, if instead of nuclear weapons there had been some smaller thing
or something less distinctive. CA: And as well as bad incentives
for powerful actors, you also worry about bad incentives
for all of us, in Type-2b here. NB: Yeah, so, here we might
take the case of global warming. There are a lot of little conveniences that cause each one of us to do things that individually
have no significant effect, right? But if billions of people do it, cumulatively, it has a damaging effect. Now, global warming
could have been a lot worse than it is. So we have the climate
sensitivity parameter, right. It’s a parameter that says
how much warmer does it get if you emit a certain amount
of greenhouse gases. But, suppose that it had been the case that with the amount
of greenhouse gases we emitted, instead of the temperature rising by, say, between three and 4.5 degrees by 2100, suppose it had been
15 degrees or 20 degrees. Like, then we might have been
in a very bad situation. Or suppose that renewable energy
had just been a lot harder to do. Or that there had been
more fossil fuels in the ground.
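A rough sketch of what that parameter does (illustrative values, not from the talk): climate sensitivity is conventionally the equilibrium warming per doubling of atmospheric CO2, and warming grows roughly logarithmically with concentration, so the outcome of the same emissions path scales almost linearly with this single number.

```python
import math

def equilibrium_warming(c_ppm, sensitivity, c_preindustrial=280.0):
    """Approximate warming (deg C) from the standard logarithmic relation
    dT = S * log2(C / C0), where S is the warming per CO2 doubling."""
    return sensitivity * math.log2(c_ppm / c_preindustrial)

# Same concentration (double preindustrial), different physics:
for s in (3.0, 10.0, 15.0):
    print(f"S = {s:>4.1f} -> {equilibrium_warming(560, s):.0f} deg C")
# S near 3 gives the familiar ~3 degrees; had physics fixed S at 15,
# the very same emissions would have meant ~15 degrees of warming.
```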
CA: Couldn’t you argue that if in that case of — if what we are doing today had resulted in 10 degrees difference
in the time period that we could see, actually humanity would have got
off its ass and done something about it. We’re stupid, but we’re not
maybe that stupid. Or maybe we are. NB: I wouldn’t bet on it. (Laughter) You could imagine other features. So, right now, it’s a little bit difficult
to switch to renewables and stuff, right, but it can be done. But it might just have been,
with slightly different physics, it could have been much more expensive
to do these things. CA: And what’s your view, Nick? Do you think, putting
these possibilities together, that this earth, humanity that we are, we count as a vulnerable world? That there is a death ball in our future? NB: It’s hard to say. I mean, I think there might
well be various black balls in the urn, that’s what it looks like. There might also be some golden balls that would help us
protect against black balls. And I don’t know which order
they will come out. CA: I mean, one possible
philosophical critique of this idea is that it implies a view
that the future is essentially settled. That there either
is that ball there or it’s not. And in a way, that’s not a view of the future
that I want to believe. I want to believe
that the future is undetermined, that our decisions today will determine what kind of balls
we pull out of that urn. NB: I mean, if we just keep inventing, like, eventually we will
pull out all the balls. I mean, I think there’s a kind
of weak form of technological determinism that is quite plausible, like, you’re unlikely
to encounter a society that uses flint axes and jet planes. But you can almost think
of a technology as a set of affordances. So technology is the thing
that enables us to do various things and achieve various effects in the world. How we’d then use that,
of course depends on human choice. But if we think about these
three types of vulnerability, they make quite weak assumptions
about how we would choose to use them. So a Type-1 vulnerability, again,
this massive, destructive power, it’s a fairly weak assumption to think that in a population
of millions of people there would be some that would choose
to use it destructively. CA: For me, the single most
disturbing argument is that we actually might have
some kind of view into the urn that makes it actually
very likely that we’re doomed. Namely, if you believe
in accelerating power, that technology inherently accelerates, that we build the tools
that make us more powerful, then at some point you get to a stage where a single individual
can take us all down, and then it looks like we’re screwed. Isn’t that argument quite alarming? NB: Ah, yeah. (Laughter) I think — Yeah, we get more and more power, and [it’s] easier and easier
to use those powers, but we can also invent technologies
that kind of help us control how people use those powers. CA: So let’s talk about that,
let’s talk about the response. Suppose that thinking
about all the possibilities that are out there now — it’s not just synbio,
it’s things like cyberwarfare, artificial intelligence, etc., etc. — that there might be
serious doom in our future. What are the possible responses? And you’ve talked about
four possible responses as well. NB: Restricting technological development
doesn’t seem promising, if we are talking about a general halt
to technological progress. I think neither feasible, nor would it be desirable
even if we could do it. I think there might be very limited areas where maybe you would want
slower technological progress. You don’t, I think, want
faster progress in bioweapons, or in, say, isotope separation, that would make it easier to create nukes. CA: I mean, I used to be
fully on board with that. But I would like to actually
push back on that for a minute. Just because, first of all, if you look at the history
of the last couple of decades, you know, it’s always been
push forward at full speed, it’s OK, that’s our only choice. But if you look at globalization
and the rapid acceleration of that, if you look at the strategy of
“move fast and break things” and what happened with that, and then you look at the potential
for synthetic biology, I don’t know that we should
move forward rapidly or without any kind of restriction to a world where you could have
a DNA printer in every home and high school lab. There are some restrictions, right? NB: Possibly, there is
the first part, the not feasible. If you think it would be
desirable to stop it, there’s the problem of feasibility. So it doesn’t really help
if one nation kind of — CA: No, it doesn’t help
if one nation does, but we’ve had treaties before. That’s really how we survived
the nuclear threat, was by going out there and going through
the painful process of negotiating. I just wonder whether the logic isn’t
that we, as a matter of global priority, we shouldn’t go out there and try, like, now start negotiating
really strict rules on where synthetic bioresearch is done, that it’s not something
that you want to democratize, no? NB: I totally agree with that — that it would be desirable, for example, maybe to have DNA synthesis machines, not as a product where each lab
has their own device, but maybe as a service. Maybe there could be
four or five places in the world where you send in your digital blueprint
and the DNA comes back, right? And then, you would have the ability, if one day it really looked
like it was necessary, we would have like,
a finite set of choke points. So I think you want to look
for kind of special opportunities, where you could have tighter control.
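The service model described here is, in effect, a screening architecture: a handful of providers act as choke points, checking every submitted blueprint before anything is synthesized. A minimal sketch of such a gate (the hazard list and function names are hypothetical placeholders, not any real provider’s API):

```python
# Hypothetical choke point: every order is screened before synthesis.
HAZARDOUS_SIGNATURES = {"toxin_gene_x", "virulence_factor_y"}  # placeholders

def is_safe(annotations: set[str]) -> bool:
    """True if the blueprint matches nothing on the hazard list."""
    return not (annotations & HAZARDOUS_SIGNATURES)

def handle_order(customer_id: str, annotations: set[str]) -> str:
    if is_safe(annotations):
        return f"{customer_id}: accepted, sent to synthesis"
    # A real choke point could also log and escalate flagged orders.
    return f"{customer_id}: rejected, flagged for human review"

print(handle_order("lab-042", {"fluorescent_marker"}))
print(handle_order("lab-043", {"toxin_gene_x"}))
```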
CA: Your belief is, fundamentally, we are not going to be successful in just holding back. Someone, somewhere —
North Korea, you know — someone is going to go there
and discover this knowledge, if it’s there to be found. NB: That looks plausible
under current conditions. It’s not just synthetic biology, either. I mean, any kind of profound,
new change in the world could turn out to be a black ball. CA: Let’s look at
another possible response. NB: This also, I think,
has only limited potential. So, with the Type-1 vulnerability again, I mean, if you could reduce the number
of people who are incentivized to destroy the world, if only they could get
access and the means, that would be good. CA: In this image that you asked us to do you’re imagining these drones
flying around the world with facial recognition. When they spot someone
showing signs of sociopathic behavior, they shower them with love, they fix them. NB: I think it’s like a hybrid picture. Eliminate can either mean,
like, incarcerate or kill, or it can mean persuade them
to a better view of the world. But the point is that, suppose you were
extremely successful in this, and you reduced the number
of such individuals by half. And if you want to do it by persuasion, you are competing against
all other powerful forces that are trying to persuade people, parties, religion, education system. But suppose you could reduce it by half, I don’t think the risk
would be reduced by half. Maybe by five or 10 percent.
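The arithmetic behind that estimate is worth spelling out (numbers illustrative, not Bostrom’s): once destruction is easy, the chance that at least one motivated person succeeds saturates quickly, so halving the pool of such people barely moves the total.

```python
def civilizational_risk(num_bad_actors, p_success=0.001):
    """Chance that at least one would-be destroyer succeeds,
    treating attempts as independent; p_success is illustrative."""
    return 1 - (1 - p_success) ** num_bad_actors

print(civilizational_risk(10_000))  # ~0.99995
print(civilizational_risk(5_000))   # ~0.99326
# Halving the pool cuts the risk by under one percentage point here,
# because the curve saturated long ago; with other parameter choices the
# drop is larger but still far less than half, hence "five or 10 percent".
```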
CA: You’re not recommending that we gamble humanity’s future on response two.
NB: I think it’s all good
to try to deter and persuade people, but we shouldn’t rely on that
as our only safeguard. CA: How about three? NB: I think there are two general methods that we could use to achieve
the ability to stabilize the world against the whole spectrum
of possible vulnerabilities. And we probably would need both. So, one is an extremely effective ability to do preventive policing. Such that you could intercept. If anybody started to do
this dangerous thing, you could intercept them
in real time, and stop them. So this would require
ubiquitous surveillance, everybody would be monitored all the time. CA: This is “Minority Report,”
essentially, a form of. NB: You would have maybe AI algorithms, big freedom centers
that were reviewing this, etc., etc. CA: You know that mass surveillance
is not a very popular term right now? (Laughter) NB: Yeah, so this little device there, imagine that kind of necklace
that you would have to wear at all times with multidirectional cameras. But, to make it go down better, just call it the “freedom tag”
or something like that. (Laughter) CA: OK. I mean, this is the conversation, friends, this is why this is
such a mind-blowing conversation. NB: Actually, there’s
a whole big conversation on this on its own, obviously. There are huge problems and risks
with that, right? We may come back to that. So the other, the final, the other general stabilization capability is kind of plugging
another governance gap. So the surveillance would be kind of
a governance gap at the microlevel, like, preventing anybody
from ever doing something highly illegal. Then, there’s a corresponding
governance gap at the macro level, at the global level. You would need the ability, reliably, to prevent the worst kinds
of global coordination failures, to avoid wars between great powers, arms races, cataclysmic commons problems, in order to deal with
the Type-2a vulnerabilities. CA: Global governance is a term that’s definitely way out
of fashion right now, but could you make the case
that throughout history, the history of humanity is that at every stage
of technological power increase, people have reorganized
and sort of centralized the power. So, for example,
when a roving band of criminals could take over a society, the response was,
well, you have a nation-state and you centralize force,
a police force or an army, so, “No, you can’t do that.” The logic, perhaps, of having
a single person or a single group able to take out humanity means at some point
we’re going to have to go this route, at least in some form, no? NB: It’s certainly true that the scale
of political organization has increased over the course of human history. It used to be hunter-gatherer band, right, and then chiefdom, city-states, nations, now there are international organizations
and so on and so forth. Again, I just want to make sure I get the chance to stress that obviously there are huge downsides and indeed, massive risks, both to mass surveillance
and to global governance. I’m just pointing out
that if we are lucky, the world could be such
that these would be the only ways you could survive a black ball. CA: The logic of this theory, it seems to me, is that we’ve got to recognize
we can’t have it all. That the sort of, I would say, naive dream
that many of us had that technology is always
going to be a force for good, keep going, don’t stop,
go as fast as you can and not pay attention
to some of the consequences, that’s actually just not an option. We can have that. If we have that, we’re going to have to accept some of these other
very uncomfortable things with it, and kind of be in this
arms race with ourselves of, you want the power,
you better limit it, you better figure out how to limit it. NB: I think it is an option, a very tempting option,
it’s in a sense the easiest option and it might work, but it means we are fundamentally
vulnerable to extracting a black ball. Now, I think with a bit of coordination, like, if you did solve this
macrogovernance problem, and the microgovernance problem, then we could extract
all the balls from the urn and we’d benefit greatly. CA: I mean, if we’re living
in a simulation, does it matter? We just reboot. (Laughter) NB: Then … I … (Laughter) I didn’t see that one coming. CA: So what’s your view? Putting all the pieces together,
how likely is it that we’re doomed? (Laughter) I love how people laugh
when you ask that question. NB: On an individual level, we seem to kind of be doomed anyway,
just with the time line, we’re rotting and aging
and all kinds of things, right? (Laughter) It’s actually a little bit tricky. If you want to set up
so that you can attach a probability, first, who are we? If you’re very old,
probably you’ll die of natural causes, if you’re very young,
you might have a 100-year — the probability might depend
on who you ask. Then the threshold, like, what counts
as civilizational devastation? In the paper I don’t require
an existential catastrophe in order for it to count. This is just a definitional matter, I say a billion dead, or a reduction of world GDP by 50 percent, but depending on what
you say the threshold is, you get a different probability estimate. But I guess you could
put me down as a frightened optimist. (Laughter) CA: You’re a frightened optimist, and I think you’ve just created
a large number of other frightened … people. (Laughter) NB: In the simulation. CA: In a simulation. Nick Bostrom, your mind amazes me, thank you so much for scaring
the living daylights out of us. (Applause)

100 thoughts on “How civilization could destroy itself — and 4 ways we could prevent it | Nick Bostrom”

  1. This talk generated a number of observations; I’ll touch on a few.

    Arrogance on a macro level: humans have complete control of our planet, and the future rests entirely on human decisions and actions/inactions. Arrogance on a micro level (and I detect this in many TED talks): TED is an exclusive society, and of course speakers and those bright enough to listen and agree with them are intellectually and otherwise superior and would do a much better job than the “little people,” the unwashed masses, at about anything. In fact, I think I heard “humanity is stupid” or some analogous remark. This is a dangerous road leading to more government, increased power ceded to government and less freedom, because the underlying assumption is that the average person is incapable of making their own decisions, so the smart people must herd the masses for their own good.

    How much more interesting and authentic would TED be if every or even some talks intentionally included competent people freely presenting counter arguments in many or most of these discussions?

    But of course we all know that we must believe that we can make a difference to even start moving, and fear or self-protection is the most effective way to cause action. We must then find a sense of belonging, or community, to maintain momentum and add status for being part of something important. Inserting a turd in the punch bowl would ruin the party, but phrases like “settled science” and “consensus”, and especially this intellectual-superiority dynamic, are perhaps a danger signal. Of course, this would require TED leadership to value and behave with humility and be capable of introspection and honesty. Maybe a 12-step program would be helpful in revealing this truth: it’s hard to admit we’ve got a problem.

    Intelligent debate is healthy, impressive, interesting and persuasive but I find legitimate and compelling skepticism on about any topic severely lacking in this community. In fact I find it quite sectarian, biased, exclusive and arrogant and this is either a significant weakness or a tremendous opportunity. Does TED really want to be intellectually inclusive or remain on its comfortably exclusive path?

    TED is a wonderful forum; it could be so much more so if legitimate, inclusionary intellectual curiosity and debate were valued and happened. Our world needs an example of what true inclusion and respectful intellectual debate look like.

    One of the topics that I haven’t yet watched covers something along the lines of the difficulty of finding truth today. It’s extraordinarily difficult to find truth on nearly any topic, increasingly so every year, and TED could be the place, the platform, to find truth-based research, facts, and the world’s leading experts, exclusive of and ignoring popularity, politics or dissension.

    What a wonderful opportunity. Platform is here, opportunity is wide open, need is urgent.

  2. They are referencing the "law of the jungle" where anything goes but God is the architect, the keeper and judge of all things.

  3. A lot of confidence in a highly speculative matter… Don't get me wrong, it is good to think about what could go wrong, but we should be aware that it is really unknown… And things like AI and advanced technology could greatly benefit humanity, like technology always did… Imagine life some centuries ago…. It was miserable for 90% of the people… And even if you were a king, if you caught some illness or even a toothache, you were in for a much rougher ride than a lot of poor people today…

  4. 9:19 "The future is not set. There is no fate but what we make for ourselves."

    This is why 'Terminator' is the most relevant movie franchise. The metaphor is real; the battle is Man vs. Machine, or in other words, man vs. his own creations, or himself. The logical part of his psyche that is augmented through technology.

  5. God, this is simpleton analysis. Not a hint of awareness that our destruction has anything to do with resource depletion and ecological destruction due to over-population. Just a focus on "good inventions" versus "bad inventions".

  6. I somehow don't like this discussion. Actually, the first assumption is that the balls we draw from the urn are of clearly predefined colour. I would argue that we, as humanity, colour them. They are mostly neutral; then we use them for what we think is good, e.g. the internal combustion engine, but after a while the negatives start to outweigh the positives…

  7. As depressing as it may be to hear these types of conversations, it's important we take note of the state of humanity.

  8. Mass surveillance is well on its way to being implemented (Windows 10, the 5G network, 'the internet of things'), and then 'true AI' comes into play. The 2%ers are in a mad dash to develop AI; once that's done and total mass surveillance is in full swing, the other 98% of us will be phased out of the work force and shortly thereafter eliminated from the earth's populace. Once such a large percentage of the world's population is deleted, the earth/nature would be able to easily recuperate from global warming, leaving a vast paradise for the 2 percenters to shape to their own wills. If you think I'm wrong, take a real close look at history. Why are the 2% that own everything so willing to kill to own even more?

  9. The prefrontal cortex is related to consciousness, so it protects us. It stops us from doing colossally unethical stuff, unless we are overwhelmed by anger.

  10. Overabundance of innovation is a black ball, causing humans to lose jobs and be replaced… The Luddite is extreme, but the technologist is too. We shouldn't work hard to replace humans, because we ARE humans. 🙄🤔 I love humans. Don't try to replace me with a robot. That's not nice.

  11. Limited, finite life on earth and unique personality and DNA are the only things that govern value. If there's an unlimited Adriana, I will have no value. Too replaceable.

  12. If you build a DNA machine, all countries will print their own nationalist Hitler head of defence, to manage their military, to exercise their might, taking low to another level. As if the method of death carries meaning. It doesn't matter if you generate antihydrogen by smashing atoms or what (I don't know how that's made), but death by sophistication or by stupidity or pocket knife or natural disaster is still death 1.0. There's no credit, no death 2.0, due to method of death.

  13. War has to happen and a global cleanse has to happen once in a great while, to destroy stuff like accumulated stupid culture: TikTok, or Aztec human sacrifices, or suing people for millions for calling you names, or burying baby girls… Sometimes rebellious humans get so overbearing and dysfunctional that god has to shake the magnetic doodle pad and start over. A high price for equipping a creation with innovative might and free will, sprinkled with passion and anger and Asperger's.

  14. It's not a black ball or a white ball. It's a yin-yang ball: not devil, not angel, not 100% evil, not 100% angel. A human. A yin-yang ball.

  15. We hope all the videos will be accompanied by Arabic subtitles so that everyone can benefit. Many thanks.

  16. If you control the bad actor with technology for preventing destruction, it's exactly the same.
    Can you really trust the controller? Wouldn't it give too much power to whoever holds it?
    In my opinion, the "good actor" who acts against the "bad actor" by controlling them with tech becomes himself a potential "bad actor".

  17. Social media and the internet are destroying civilization. We have an entire generation that believes it HAS to have an online "brand" or they aren't good enough. That they HAVE to be social activists. That it's rude to call people on the telephone because you're "interrupting" the person you're calling. Every opinion is valid because everyone has one.

  18. This feels improperly thought through. The premise of the thought exercise is that from outside the urn you cannot see what's coming out next. So, how can one forbid research into things one doesn't know exist, and what could possibly be gained from surveilling everyone when one doesn't know what one is looking for?

    The world is inherently uncertain. There is no way to get complete certainty and security. Even if you stopped educating the general population and eventually benighted ignorance reigned supreme and we'd all start dying of the flu or abscesses or diarrhoea, who's to say that some bright young thing 5000 years down the line won't start the search for knowledge again and undo all this system of planned perfect misery that we have gone out of our way to create?!

    It won't work.

  19. I would submit that several black balls have already been removed from the urn. The color fades with time as we tend to move away from the proverbial cliff.

  20. I'll summarize everything in a sentence:
    Ego, pride and weapons of mass destruction will end humanity. When World War 3 happens, it will be the last war humanity will witness…

  21. I am being optimistic here, but there is another possibility it appears Bostrom has not considered, and it is that physics knowledge is like a book with an ending, and we could read (understand) the whole of it before trying out each possible technological innovation. That would allow for preventing or not creating those innovations which end us. This means knowing which black balls are in the jar and just not pulling them out, or getting the golden ones which solve the threat in a different manner.

    I think that what I have described is both what we need to hope for and to work for.

  22. 1:39 we don't know if we have pulled out the black ball, and even if we HAVE pulled out the black ball, there is no certainty that we won't break it.

  23. The comments section reads like the murmured background chatter in a dark movie, the kind that intentionally makes viewers enter the grey area of their morals.

  24. We need to use crowdfunding worldwide to make the Sahara Desert green; I believe the amount of money raised this way would be astounding and would allow for immediate action. We have the technology to do so now. Have you looked at the size of the desert in Africa? It keeps growing and displacing people, causing unrest, war, etc. Carbon capture, lowering runaway weather, as is happening as we speak. China is reclaiming the Gobi Desert. Google how to reclaim a desert. We need volunteers in every country to make it happen. Thank you.

  25. The error was allowing a journalist to interview Bostrom. The interviewer is clearly too dumb for the job. This is a very important topic and he blew it.

  26. You are all being manipulated

    Don’t be foolish and believe that global warming is due to automobile exhaust….

    The rise of volcanic activity and the emissions from volcanoes are on the order of 10 trillion times more detrimental in contributing greenhouse gases than automobiles throughout history

    Automobiles have contributed, but at such a low level… next you’re going to start talking about a hole in the ozone layer and how it was caused by hairspray and CFCs and such nonsense – while at the same time completely dismissing the thousands upon thousands of nuclear explosions mankind has set off on earth….

    We need to stop Looking at our differences and find commonality between each of us and the struggles that we all go through

    We need to stop fighting wars especially ones that have been going on for thousands of years with no end.

    We need to look at the common struggles that we have we also need to stop being manipulated by those that control the media

    Most people have no idea of the strides that humanity has made in combating poverty across the world and in bringing up Third World nations; they have no idea of the successes that have been achieved in places such as Africa, all at the same time that people are waging war

    Our societies have become focused on money and the exchange of that money over the greater good of mankind

  27. I don't even need to watch the video more than a few seconds to see that these weak, mewling, effeminate man-children are the major source of the problem.

  28. " Bullshit-Lips "……….Someone who has the ability to BULLSHIT people into believing things they never did before.
    Yea…………We all waiting for an authoritarian " World Government ".

  29. Earth is a toddler that just found out that it has power; now it's testing that power on the only thing it knows: itself. We'll grow out of it as humanity. Everyone has their internal battles to win first.

  30. i respect nick bostrom, but this new black ball narrative feels a little bit like a brilliant mind in desperate search for the next big controversial hypothesis, to keep on being the talk of the town. and in order to get there, he creates this false dichotomy.

    he even contradicts himself in this dialog. first he says that a black ball means instant doom for us all. then he goes on by saying that there might also be a golden ball that could cancel the black ball. so how many different colors of balls are in there? and how do they interact?

    as far as analogies go, this one is creative and thought inspiring, but that's about it. it holds no water.

  31. Westerners who keep murdering and plundering others at will are "worried" about "civilisation ending technologies" in "wrong hands" and want "global governance" to prevent it. Oh! The irony!

  32. We require more empathy and more intelligence as a global population, so:
    First, ban sociopaths, psychopaths and narcissists from positions of power/management, or separate them from the rest of the population completely. And stop the spread of their already ingrained viral lies, both very old and very new.
    Secondly, to improve general IQ, ban anyone from breeding with anyone closer than an 8th cousin, so as to create increased hybrid vigour in the gene pool.

  33. I suspect that we are at greater risk from societal collapse arising from poorly functioning social systems and the interaction with the environment than from some sort of Pandora's box technological black ball. For example, we can't get rid of fission and fusion weapons, yet struggle to benefit from safer and cheaper fission technologies for power generation, and so far practical exploitation of fusion for energy production remains elusive. Rather than using improvements in photovoltaics and batteries to raise the resilience of individuals and small pockets of folks to natural disasters, we are using them to make the overall electrical power distribution system less resilient in some sort of potlatch social ceremony.

  34. Wow. It didn't take very long at all in his proposed solutions to have a police state. He should have just introduced himself as being part of the Fear Party.

  35. Why is Bostrom so focused on this 1 singular hypothetical black ball that we might draw out? What about the countless balls that are already out of the bag that all have their own shade of grey, and are cumulatively contributing to a trajectory toward civilisational collapse? Literally every technology and invention we utilize today is contributing toward ecological collapse, climate change, unsustainable resource management, pollution, you name it. It's much more likely that civilisation collapses due to the 1000s of shades of grey balls we're already collectively juggling, than to 1 black ball ie. wayward superintelligent AI. Pandora's box has ALREADY been opened, trying to put undue restraint on emerging technologies in an effort to stop ourselves from picking that 1 black ball, might be equivalent to shutting Pandora's box at the last minute before hope has a chance to come out of it.

  36. How to destroy it? I'll tell you how: by being arrogant and barbaric towards other cultures and peoples, and better yet, demonizing them in the name of Freedom of Speech.

  37. This discussion shows a total ignorance of how this universe works.
    The Greeks knew that over 2 millennia ago – Prometheus stole fire from the gods and it didn't pan out well.
    And here I listen to Prometheans .. and wonder if they ever read history.
    Entropy is the main-game. Always was.
    Entropy has in it a self-limitation.
    That is why the Logistic Map appears in everything.
    Chaos creates order, and order creates chaos. They are the same thing.
    Notions of "black balls in the urn" are no more than accepting the truth of chaos.
    However, it seems like hubris to claim that this particular generation of humans can pick the "black-ball" without knowing the math.

    Chaos often does not include the black-ball.
    It all depends upon the "r" of entropy.
    And no one has yet delivered that.
    Until someone does? There will be no black-ball.
    In computer sims of the Logistic, somewhere over 4.something crashes the sim – could be Feigenbaum's constant .. who knows?
    But in the Mandelbrot set, nothing escapes the point-of-accumulation in any dimension beyond 2.
    It is statistically unlikely that there is a black-ball in the urn.
    This universe is emergence on a boiling wave-front of self-limited entropy.
    The nature of emergence is you cannot know it till it happens.
    How about you just enjoy the ride?
    It's huge fun, and most die before they even tasted what fun is?
    And that's the real tragedy.
    Sure. the local-minimum is essential .. but please don't spend too much time in one .. they are ultimately toxic.
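For what it's worth, the logistic-map behavior invoked in the comment above is easy to check directly (a sketch, assuming the standard iteration x -> r*x*(1-x); the "crash" is the iterates escaping [0, 1] and diverging once r exceeds 4):

```python
def logistic_orbit(r, x0=0.5, steps=60):
    """Iterate the logistic map x -> r*x*(1-x). For 0 <= r <= 4 the
    orbit stays in [0, 1]; for r > 4 it escapes and diverges."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
        if abs(x) > 1e9:
            return None  # escaped to infinity: the "crash"
    return x

print(logistic_orbit(3.2))  # settles into a period-2 cycle in [0, 1]
print(logistic_orbit(4.2))  # None: the sim "crashes" once r > 4
```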

  38. The highest technology has NO technology. Nature operates by this technology that has no technology. Man operates by technology that is accumulated, amassing “know how”, driven primarily by the profit motive. It doesn’t matter how much “know how” is accumulated, it will never, never, never exceed the technology that has no technology that Nature operates by. Nature takes EVERYTHING into account at the same time, represented by ABCD. Man operates A-B-C-D, thinking linearly, in steps, which precludes the ability to consider the infinite number of probabilities At The Same Time. So Man is essentially doing things blindly, hence putting Man and The Biosphere in existential danger. One of the advantages of being as old as I am, having seen Man try to destroy the earth all my life, is that there is a chance of leaving this life before blind scientists/whores destroy the earth thinking they can take the money with them…

  39. I believe this man's views of possible outcomes for the future are somewhat realistic, but lacking in imagination and faith in what humans can accomplish if we work together.

    He said there are only 3 options, but there is another. Abandon the monetary system, abolish all militaries (government assassination squads) and make it highly illegal for any country to have one. The only military that should exist is the UN military, which should be funded by every single country in the world, and whose sole purpose is to hunt down and destroy any other military that threatens world peace.
    Abandon the crony capitalist system in exchange for a more globalist agenda, in which we produce the four essential human necessities (food, water, shelter, education) in complete abundance, not regulated or stimulated by a monetary system, just non-stop production, as much as we can make, for everyone. Before, this was not possible. But with AI and robotic technology it is now possible, as they can produce those essentials for us. So we can use the tiny bit of time we are allotted on this earth finding ways to improve humanity, instead of wasting our lives being economic slaves to the few who try to control us, funding a military-industrial complex that does so many unspeakably horrible things.
    This new pattern of thinking and seeing the world can even be taught to children in schools so that they embody that mission, and use their lives as productively as possible, improving humanity to its most efficient existence, in which we get to live most of our lives actually happy, doing what we want. And there are many more things.

    We cannot limit our future to the constraints of past ideologies. This is our world, we choose to shape it however we want.

  40. This guy is a quack. I talk much more fluently on my channel. He is completely misunderstanding the effects of climate change: when we reach 2C and go into abrupt climate change, the world will be miserable and uninhabitable, with food shortages galore. We literally have less than 10 years to save the earth.

  41. To suggest, as the interviewer did, that the existence of treaties was what prevented nuclear holocaust in the 20th century is naive in the extreme.

  42. Useless talk.
    No matter how you cut it, humans will be extinct in a cosmic blink of an eye.
    Whether we do it to ourselves is utterly irrelevant.
    We are factually and unavoidably "doomed".
    Our dependence on high-functioning technology to support the masses makes us hopelessly vulnerable currently. And our dense population centers, with virtually instant travel worldwide, expose us to the inevitable cataclysmic plague.
    "Climate change" is a big nothing burger in comparison.
    His totalitarian "police" monitoring us is a fate worse than death also.

  43. Pretty dumb for a TED talk. In a world created on ideas, there is possibly one that could harm? Think of the ideas you had over the years that put us in a situation where something could go wrong.

  44. Human civilization has already been destroying itself, with its exponential growth, consumption and the technologies it worships.
