The Creepy Line – Full Documentary on Social Media’s Manipulation of Society

Is this a realistic prospect in ten years,
and do you understand why it kind of creeps
people out a little bit? There's what I call the creepy line,
and the Google policy about a lot of these things is to get right
up to the creepy line but not cross it. I would argue that implanting things in
your brain is beyond the creepy line. Mine in particular.
Yes, at least for the moment. What is internet, anyway? What do you mean? How does one… What,
do you write to it, like mail? No, a lot of people
use it and communicate. I guess they can communicate
with NBC writers and producers. Allison, can you explain what Internet is? No, she can't say anything
in 10 seconds or less. Allison will be in the studio shortly.
What is it? It’s a giant computer network
made up of… Started from… Oh, I thought you were going
to tell us what this was.

It’s like a computer billboard. It’s not anything. It's a computer
billboard, but it's nationwide, it’s several universities and
everything all joined together. Right.
And others can access it. Right.
And it's getting bigger and bigger all the time. At the end of the 20th century, Silicon
Valley industry titans like Steve Jobs and Bill Gates made a promise that would
forever alter how we perceive the world.

It spans the globe like a
superhighway. It is called “Internet”. Suddenly, you're part of a new mesh of
people, programs, archives, ideas. The personal computer and the Internet
ushered humanity into the information age, laying the groundwork for
unlimited innovation. At the dawn of the 21st century,
there would be a new revolution. A new generation of young geniuses made
a new promise beyond our wildest dreams. The idea is that we take all
the world's information and make it accessible
and useful to everyone. Limitless information, artificial
intelligence, machines that would know what we wanted and would
tend to our every need. The technology would be beyond
our imaginations, almost like magic, yet the concept would be as
simple as a single word: search. This was the beginning of Google,
and less than a decade later, Facebook. They would make a new promise to humanity.

What I think is so interesting
about Google and Facebook is they were founded
at American universities. At Stanford University, you had two
students, Larry Page and Sergey Brin, decide they wanted to create the
ultimate search engine for the Internet. Across the country, in the case
of Facebook, you had Mark Zuckerberg, a student at Harvard, who decided he
wanted to create a social media platform, basically to meet girls and to make
friends, and hence we got Facebook. You never think that you could build
this company or anything like that, right? Because, I mean,
we were college students, right? And we were just building
stuff because we thought it was cool. It was all about promise, it was all about
this shiny future, it was all about the free flow of information, but there
was also a certain idealism behind it.

I think that a company like Google, we have the potential to
make very big differences, very big positive differences in the
world, and I think we also have an obligation as a consequence of that. Facebook and Google were started
with a great idea and great ideals. Unfortunately, there was
also a very dark side. The goal that we went into it with
wasn't to make an online community, but sort of like a mirror for the real
community that existed in real life. Google handles 65% of all Internet
search in the US and even more overseas. In the early days, Google
was just a search engine. All the search engine was initially was
an index to what's on the Internet.

The Internet was pretty small back then.
They had a better index than other indexes that were around; theirs
was not the first search engine. Page famously said that the point of
Google is to get people onto Google and off of Google, out into the open
web, as quickly as possible, and largely, they were really successful at that. The success of Google led to, in many ways,
the success of the open web and this explosive growth in start-ups
and innovation during that period. Google had developed this incredible
technology called PageRank, unveiling what was known as
the BackRub algorithm, and it was just leaps and bounds ahead
of services like Yahoo, AltaVista, and quickly became the clear winner in
the space just from a technology basis. And what PageRank does is spider
around the Internet instead of being a directory service like Yahoo was.

It’s taking pictures of web pages
and analyzing the links between the web pages and using that to make
some assumptions about relevance; and so, if you're looking for some
history of Abraham Lincoln and there's a bunch of links across
the web pointing to a particular site, that must mean that this
page is the “most relevant.”
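
The idea just described – that links act as votes for relevance – can be sketched in a few lines. Here is a minimal, hypothetical illustration in Python; the tiny four-page "web" below is invented, and the real PageRank adds many refinements at vastly larger scale:

```python
# Minimal sketch of link-based ranking over an invented tiny "web".
# Pages that accumulate links from other (well-linked) pages float upward.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the pages it links out to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages          # dead ends share rank evenly
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

web = {
    "lincoln-history.org": ["loc.gov"],
    "school-essay.net":    ["lincoln-history.org"],
    "blog.example":        ["lincoln-history.org", "loc.gov"],
    "loc.gov":             [],
}
# The most-linked-to pages end up ranked "most relevant".
for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {page}")
```
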
Now, they had to figure out: how do we make money off this? We've got an index. How do we make money? Well, they figured out how to do that. It's pretty simple.
All you do is track people’s searches. Your search history is
very, very informative. That's gonna tell someone immediately
whether you're Republican or Democrat, whether you like one
breakfast cereal versus another. That's gonna tell you whether
you're gay or straight. It's gonna tell you thousands of
things about a person over time, because they can see
what websites you're going to. And where do they go? Did they go to a porn site?
Did they go to a shopping site? Did they go to a website
looking up how to make bombs? These pieces of information, essentially,
are building blocks; they are constructing a profile
of you, and that profile is real, it's detailed, it's granular,
and it never goes away.
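
To make that concrete, here is a toy sketch of profile-building from search queries. The categories and keywords are invented for illustration; a real profile draws on far more signals than search terms:

```python
from collections import Counter

# Toy sketch: each search query adds a little evidence to a standing
# profile. The category keywords below are invented for illustration.
CATEGORIES = {
    "health":         {"symptoms", "treatment", "illness"},
    "politics/left":  {"medicare for all", "climate action"},
    "politics/right": {"tax cuts", "border security"},
    "shopping":       {"buy", "cheap", "deals"},
}

def update_profile(profile: Counter, query: str) -> None:
    q = query.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in q for k in keywords):
            profile[category] += 1   # evidence accumulates and never expires

profile = Counter()
for query in ["flu symptoms", "medicare for all explained", "buy umbrella cheap"]:
    update_profile(profile, query)

print(profile.most_common())
# A profile starts to form after just three searches, and it only
# gets more granular over time.
```
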

Google has more than doubled
its profits in the last year selling Internet advertising, but with Microsoft and Yahoo
gunning for its search engine users, fortunes can quickly change. This was really the first large-scale
mechanism for targeted advertising. For example, I want to sell umbrellas. Well, Google could tell you exactly,
you know, who is looking for umbrellas because they have your
search history, they know. So, they could hook up the umbrella makers
with people searching for umbrellas. Basically send you targeted ads. That is where Google gets
more than 90% of its revenue. If you look at how they structured
the company, it was structured so that the founders of the company
would always have controlling shares and would always determine the
direction and the future of the company. Anybody else who bought into the company was simply along for the ride
in terms of making money. So, there was this idealism
that they said motivated them, and their corporate motto
was “don't be evil.” When the “don't be evil” slogan
came up – it was during a meeting, I think in 2003, if I recall right –
people were trying to say, “Well, what are Google’s values?”
and one of the engineers who was invited to this meeting said, “We could just sum it all up
by just saying ‘don't be evil.’” What's so interesting about this term
that they were not going to be evil, is they never defined what evil was.

They never even gave us a
sense of what it meant, but people latched on to it and sort
of adopted it and trusted it because, of course, nobody wants to be
associated with an evil company, so people gave the term “evil”
their own meaning. Even their home page indicates to
you that you're not being manipulated and they sort of broadcast to you
the idea that this was a clean, simple, reliable, honest interface, and that
the company that produced it wasn't overtly nefarious in its operations.
So, there were no ads on the front page, for example, which was a big deal.

And so, everybody in that
initial phase of excitement, of course, was more concerned about exploring
the possibilities of the technology than concerned about the potential
costs of the technology. It was always kind of a generous thing,
“We're gonna give you these great tools, we're gonna give you these
things that you can use,” and then Google was saying, “No, actually, we're gonna keep your data
and we're gonna organize society. We know how to improve
and optimize the world.” In 2007, there were a couple of trends
I think led Google off of its path of being the place that just matched people
with the best information on the Web and led them off into the Internet,
and that was the birth of smartphones.

when Steve Jobs unveiled the iPhone, and the rise of Facebook. This notion of time spent
on a site, with users spending lots and
lots of time on Facebook, became a really important metric that
advertisers were paying attention to, and so Google's response was basically
to deviate from that original ethos and become a place where they were
trying to keep people on Google.
started to branch out into other areas. They were getting a lot of information
from people using the search engine, but if people went directly to
a website, uh-oh, that's bad, because now Google doesn't know that.

So, they developed a browser,
which is now the most widely used browser in the world: Chrome. By getting people to use Chrome,
they were able now to collect information about every single website you visited,
whether or not you were using their search engine, but of course,
even that's not enough, right, because people do a lot of things now on
their mobile devices, on their computers, without using their browser. You want
to know what people are doing, even when they're not online. So, Google developed an operating system,
which is called Android, which on mobile devices is the
dominant operating system, and Android records what we're doing,
even when we are not online.

As soon as you connect to the Internet,
Android uploads to Google a complete history of where you've been that day,
among many other things. It's a progression of
surveillance that Google has been engaged in since the beginning. When Google develops
another tool for us to use, they're doing it not to make our
lives easier, they're doing it to get another source of information about us. These are all free services,
but obviously they're not, because huge complicated
machines aren't free.

Your interaction with them is governed
so that it will generate revenue. And that's what Google Docs is,
and that's what Google Maps is, and literally now more than a hundred
different platforms that Google controls. Companies that use the
surveillance business model – the two biggest being Google and
Facebook – they don't sell you anything. They sell you! We are the product. Now, what Google will tell you is,
“We're very transparent. There's a user agreement. Everybody knows that if you're going to
search on Google and we're going to give you access to this free
information, we need to get paid, so we're going to take your data.” The problem is, most people don't
believe or don't want to believe that that data is going to
be used to manipulate you. The commonality between
Facebook and Google, even though they're
different platforms, is math. They're driven by math and
the ability to collect data.

You sign on, you make a profile
about yourself by answering some questions, entering in some information, such as
your concentration or major at school, contact information like phone numbers, instant messaging screen names,
anything you want to tell. Interests, what books you like, movies, and
most importantly, who your friends are. The founders of these companies were
essentially mathematicians who understood that by collecting this information,
you not only could monetize it and sell it to advertisers, but you
could actually control and steer people or nudge people in the direction
that you wanted them to go.

An algorithm basically determines
how certain things should react. So, for example, you give it some inputs,
be it in the case of Google, you would enter in a query, a search
phrase, and then certain math will make decisions on what information
should be presented to you. Algorithms are used throughout the
whole entire process in everyday life, everything from search
engines to how cars drive. The variables you decide, and when
you have a human factor deciding what the variables are, it’s
gonna affect the outcome.
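
As a trivial, hypothetical illustration of that point – the "algorithm" is just a rule somebody chose, and changing the human-chosen variable changes what gets presented:

```python
# Hypothetical example: a one-rule "search algorithm". The rule choice
# (here: prefer the shortest matching title) is a human decision, and a
# different choice of variable produces a different outcome.
def choose_result(query: str, pages: list) -> str:
    matching = [p for p in pages if query.lower() in p.lower()]
    return min(matching, key=len) if matching else "no result"

print(choose_result("lincoln", ["Abraham Lincoln", "Lincoln, Nebraska tourism"]))
# -> "Abraham Lincoln"; swap min for max and the answer changes.
```
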

I want you to imagine walking into a room, a control room with a bunch of people, a hundred people hunched
over at desks with little dials, and that that control room
will shape the thoughts and feelings of a billion people. This might sound like science fiction,
but this actually exists right now, today. Our ethical presuppositions are
built into the software, but our unethical presuppositions are built
into the software as well, so whatever it is that
motivates Google is going to be built right into the
automated processes that sift our data.

I used to be in one of
those control rooms. I was a design ethicist at Google, where I studied “How do you
ethically steer people's thoughts?” Because what we don't talk
about is a handful of people working at a handful of technology
companies, through their choices, will steer what a billion
people are thinking today. What Google is essentially is a
gigantic compression algorithm, and what a compression algorithm
does is look at a very complex environment and simplify it;
and that's what you want, when you put a term into a search engine,
you want all the possibilities sifted so that you find the one that
you can take action on essentially. And so Google simplifies the world
and presents the simplification to you, which is also what your perceptual
systems do, and so basically we're building a giant perceptual machine that's an intermediary between
us and the complex world. The problem with that is that whatever the
assumptions are that Google operates under are going to be the filters that determine how the world
is simplified and presented. So, when you want to find something,
a term, a person, do you Google it? It's really fast, no doubt about that, but are you getting the best
answers to your questions? The algorithm that they use
to show us search results, it was written for the precise
purpose of doing two things, which make it inherently biased,
and those two things are (1) filtering, that is to say, the algorithm has to
look at all the different web pages it has in its index,
and it's got billions of them, and it's got to make a selection.

Let's say I type in,
“What's the best dog food?” It has to look up all the dog foods it has
in its database and all the websites associated with those dog foods –
that's the filtering – but then it's got to put them into an
order, and that's the ordering. So, the filtering is biased, because
why did it pick those websites instead of the other 14 billion?
And then the ordering is biased, because why did it put Purina first
and some other company second? It would be useless for us
unless it was biased. That's precisely what we want.
We want it to be biased. So, the problem comes in, obviously,
when we're not talking about dog foods, but when we're talking
about things like elections. I am running for President
of the United States. Because if we're asking questions
about issues like immigration or about candidates, well, again, that
algorithm is gonna do those two things. It's gonna do the filtering, so it's gonna
do some selecting from among billions of web pages, and then it's
gonna put them into an order.
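
A minimal sketch of those two steps over an invented index – the point is that both the filter and the ordering score encode somebody's choices:

```python
# Sketch of the two inherently biased steps described above:
# (1) FILTER the index down to matching pages, (2) ORDER them by a score.
INDEX = [
    {"url": "purina.example",      "terms": {"dog", "food"}, "score": 0.92},
    {"url": "acme-kibble.example", "terms": {"dog", "food"}, "score": 0.87},
    {"url": "cat-cafe.example",    "terms": {"cat", "food"}, "score": 0.99},
]

def search(query_terms, index=INDEX):
    # Filtering: why these pages and not the billions of others?
    hits = [page for page in index if query_terms <= page["terms"]]
    # Ordering: why is this page first and that one second?
    return sorted(hits, key=lambda page: page["score"], reverse=True)

for page in search({"dog", "food"}):
    print(page["url"])   # purina.example first -- because the score said so
```
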

It will always favor one
dog food over another, one online music service over another, one comparative shopping service over
another, and one candidate over another. The Google search algorithm is made
up of several hundred signals that we try to put together to serve
the best results for each user. Just last year, we launched over 500
changes to our algorithm, so by some count we change our algorithm almost every day,
almost twice over. Now it's possible that they're using
unbiased algorithms, but they're not using unbiased algorithms to do things like
search for unacceptable content on Twitter and on YouTube and on Facebook. Those aren't unbiased at all. They're built specifically
to filter out whatever's bad. What happens is the best technical people
with the biggest computers with the best bandwidth to those computers
become more empowered than the others, and a great example of that
is a company like Google, which for the most part is just scraping
the same Internet and the same data any of us can access and yet is able to
build this huge business of directing what links we see in front of us, which is
an incredible influence on the world, by having the bigger computers,
the better scientists, and more access.

As a digital platform, we feel like we
are on the cutting edge of the issues which society is grappling
with at any given time. Where do you draw the line of
freedom of speech and so on? The question precisely is, “What's good
and bad, according to whose judgment?” And that's especially relevant given that
these systems serve a filtering purpose. They're denying or allowing us access
to information before we see and think, so we don't even know what's
going on behind the scenes. The people who are designing these systems
are building a gigantic unconscious mind that will filter the world for us and it's
increasingly going to be tuned to our desires in a manner that
will benefit the purposes of the people who are
building the machines.

What these algorithms basically do is
they create bubbles for us, because the algorithm learns, “This is
what Peter Schweizer is interested in,” and what does it give me when
I go online? It gives me more of myself. I mean, it's narrowing your
vision in some ways. It's funneling my vision; it's leading
me to a view of the world. And it may not be Facebook's view of the
world, but it's the view of the world that will make Facebook the most money. They're attempting to addict us and
they’re addicting us on the basis of data. The problem with that is that you
only see what you already know, and that's not a good thing, because
you also need to see what you don't know, because otherwise you can't learn. Facebook this morning is defending itself
against accusations of political bias. An article posted Monday on the tech
news site Gizmodo said Facebook workers suppressed conservative-leaning
news stories in its trending section. And it creates a terrible dynamic when it
comes to the consumption of information and news because it leads
us to only see the world in a very, very small and narrow way.

It makes us easier to control
and easier to be manipulated. I think what's so troubling about
Google and Facebook is you don't know what you don't know, and while
the Internet provides a lot of openness, in the sense that you have
access to all this information, the fact of the matter is you
have basically two gatekeepers: Facebook and Google, which
control what we have access to. News organizations have to be prepared
that their home pages are less important. People are using… They're using
Facebook as a homepage; they're using Twitter as
their own homepage. Those news streams are their
portals into the world of information. Google and Facebook started out as
Internet companies and it has changed. They've morphed into media companies, they've morphed into a telecommunications
provider, and at least in the US, media companies are regulated,
newspapers are regulated, telecommunications
providers are regulated. Internet companies, social media companies
like Google and Facebook, they're not. So, the roles have changed. Studies show now that individuals
expect the news to find them.

When we started, the number one thing
that we threw out with our company was the idea that millennials aren't interested in
news. We were like, “They are. That doesn't make any sense;
everyone wants to be informed.” Of course.
It’s just that they're busy and you need to create a different
form of news to fit into their lives. There's so much active misinformation, and it's packaged very well and it looks the same when you see
it on a Facebook page. We can lose so much of what we've gained,
in terms of the kind of democratic freedoms and market-based
economies and prosperity that we've come to take for granted.

Both Google and Facebook announced
they are updating their company policies to ban fake news sites from
using their advertising networks. Both tech giants have come under
scrutiny in recent weeks for presenting fake news during the
presidential election. Well, fake news is not even really a
description of anything anymore. It's a weapon, it gets tossed around.
If there's a real news story that somebody doesn't like, they just
dismiss it as fake news. It's kind of a way of trying to get it
to go away and it's being misused. It's being used as a way to dismiss actual factual evidence
that something has happened. Right now, in the news, you'll
see a lot of concern with so-called fake news stories.

There are three reasons why we
should not worry about fake news stories. Fake news is competitive. You can
place your fake news stories about me and I can place my fake news stories
about you; it’s completely competitive. There have always been fake news stories. There have always been fake TV
commercials and fake billboards. That has always existed and always will.
Second, you can see them. They're visible; you can actually
see them with your eyeballs. Because you can see them, that
means you can weigh in on them. You also know that there's a
human element, because usually, there's a name of an author
on the fake news stories.

You know, you can
decide just to ignore them. Generally speaking, most news stories, like most advertisements,
are not actually read by anyone, so you put them in front
of someone's eyeballs, and people just kind of
skip right over them. The third reason why fake news
stories are not really an issue is because of something
called confirmation bias. We pay attention to
and actually believe things if they support
what we already believe, so if you see a fake news story
that says Hillary Clinton is the devil, but you love Hillary Clinton,
you're not gonna buy it. You're gonna say,
“That's ridiculous.” On the other hand, if you happen to think
Hillary Clinton is the devil, you're gonna go, “Yes! I knew that.” So, you don't really change people's
opinions with fake news stories, you don't. You support beliefs that
people already have.

The real problem is these new forms of
influence that (1) are not competitive, (2) people can't see, and (3) are
not subject to confirmation bias. If you can't see something,
confirmation bias can't kick in, so we're in a world right now in which
our opinions, beliefs, attitudes, voting preferences, purchases are
all being pushed one way or another every single day by forces
that we cannot see. The real problem of manipulation
of news and information is that it's secret. It's what Google and Facebook
are doing on a regular basis. By suppressing stories,
by steering us towards other stories rather than the
stories we’re actually seeking, that's the real
manipulation that's going on. Fake news is the shiny object
they want us to chase that really isn't relevant to the problems
we're facing today. We use Google so extensively;
the average person does, you know, multiple queries per day,
and some more than others, and over time, you don't realize
that it's impacting you. And we already know that people are not
necessarily fact-checking on their own; they rely on media companies,
they rely on mainstream media that rely on companies like Google to give
them answers about the world without necessarily
looking at, “Is that true?” Facebook has become the number
one source of news for Americans.

Well, gee, what if they were biasing the
newsfeed toward one cause or another, one candidate or another? Would we have
any way of knowing that? Of course not. And we don't know what
Facebook's aims are. In fact, Facebook doesn't know what
its aims are because it's going to be the sum total of all the people
who are working on these algorithms. A whistleblower, someone
who used to work for Facebook, came forward last year and said, “I was one of the news
curators at Facebook. A bunch of us used to sit around every
day and we used to remove stories from the newsfeed that were too
conservative, and now and then, we’d inject a story that we
just thought was really cool.” Facebook founder Mark Zuckerberg says
he's committed to giving everyone a voice. He's responding to an
allegation that Facebook edits conservative views
out of its trending topics. They can suppress certain types
of results based on what they think you should be seeing, based on
what your followers are presenting.

Now a new report claims that according
to a former Facebook employee, the social media mega-company sometimes
ignores what is actually trending among its billion users, if the story
originated from a conservative news source or if it's a topic
causing buzz among conservatives. Facebook constantly
manipulates their users. They do it by the things that they
insert into the news feeds, they do it by the types of posts
they allow their users to see, and the fact that they
actually decided to do psychological experiments
on their users is something that I think a lot of
people need to really fully understand, and they were doing it based upon
the different things that people posted; they wanted to see
how other people would react to them.

On HealthWatch, what your
Facebook friends post can have a direct effect on your mood. New research shows the
more negative posts you see, the more negative you could become. So, for example, let's say somebody
wanted to post something on the newsfeed that was
a very negative story; they wanted to see how
their users would react via their likes, via their statements,
via their posts, and they would show it to people who already had a predilection
to maybe having some depression or maybe having some
other issues in their lives. They can figure that out based upon
your likes, based upon your connections, based upon where you're going, and
so what they wanted to do was take that information and then use it – basically weaponize this information –
against their users, so that their users could see
different things that might affect their mood and might affect how they
interact with others.

And that's something
that is highly unethical. It appears that some young people
may have been so affected by this that they may have done harm to themselves
based upon what they saw on their Facebook feed, and it was all
because of these experiments. The thing is that we have no
standing with Facebook. We're not citizens of Facebook;
we have no vote on Facebook. It's not a democracy, and this process is
not a way we can design the future. We can't rely on this single company
to invent our digital future. Basically, there is legislation
in this country that says that if you are a platform, you are not liable
for what people publish on you; however, if you start to edit what people publish on your platform,
then your legal obligations increase. We really, I think, should be concerned
when these companies that represent themselves as neutral platforms are
actually exercising editorial control and pushing particular political viewpoints,
because they're enjoying a special protected legal status on the theory
that they're providing a neutral platform for people to speak, to have a
diversity of views and so forth; and that status is based on section 230
of the Communications Decency Act, which protects them from legal liability,
and so this is the magic sauce from a legal standpoint that makes these
large platform companies possible, is they're not legally responsible for the
things that people say on their sites.

It's the people who say
the things on their sites who bear legal responsibility. If they had to be legally responsible for
everything that everyone says on their sites, their business
model would break down. It wouldn't work very well. But I think that there's something wrong
when you use the legal liability that you were given to be a neutral
platform at the same time you're exercising very aggressive
editorial control to essentially pick sides politically. Does Facebook consider itself
to be a neutral public forum? The representatives of your company
have given conflicting answers on this.

Are you a first amendment
speaker expressing your views or are you a neutral public forum
allowing everyone to speak? Senator, here's how we think about this. I don't believe that… There is certain content that clearly
we do not allow, right? Hate speech, terrorist content, nudity, anything that makes people
feel unsafe in the community. From that perspective, that's why we generally try to refer to
what we do as a platform for all ideas. Let me try this; because
the time is constrained. It's just a simple question. The predicate
for section 230 immunity under the CDA is that you are a neutral public forum. Do you consider yourself a neutral
public forum or are you engaged in political speech, which is your right
under the First Amendment? Well, Senator, our goal is certainly
not to engage in political speech.

I'm not that familiar with the specific
legal language of the law that you speak to, so I would need to
follow up with you on that. If you're going to continue to behave
the way that you behave now, which is by editing content, filtering it
and steering the conversation the way you want it to go for political purposes,
you are going to be regulated every bit as much as any media company. So, what makes sense with this
approach is that it essentially allows the tech companies to decide,
now that they've grown up, what do they actually want to be? And it's a choice that they should make,
but they should be forced to make the choice and not hide
behind this fraud of legislation that gives them a free hand when
they don't deserve a free hand.

There's what I call “the creepy line”,
and the Google policy about a lot of these things is to get right up to
the creepy line, but not cross it. Google crosses the creepy line every day. Not only does Google cross the creepy line, the location of that line keeps shifting. Well, it's an interesting word, “creepy”, right? Because it's a word
that connotes horror.

He didn't say, “Dangerous,”
he didn't say, “Unethical.” There's all sorts of words that could have
fit in that slot. He said, “Creepy.” And a creep is someone who
creeps around and follows you and spies on you for unsavory purposes,
right? That's the definition of a creep. You know, I don't think the
typical ethical person says, “I'm going to push right up to the
line of creepy and stop there.” You know, they say more something like,
“How about we don't get near enough to the creepy line, so that we're ever
engaging in even pseudo-creepy behavior?” Because creepy is really bad,
you know, it's… A creepy mugger is worse than a mugger. The mugger wants your money. God only
knows what the creepy mugger wants; it's more than your money. You give Google a lot of information. You’re searching for the most private
stuff on Google, you're searching about, you know, illnesses and diseases that
your family have, you're searching about things that maybe your wife or your
spouse might not want you to know about, and you’re telling Google more than you
would tell a family member or your spouse or a very close friend, and Google has so much
information about you, that it's scary.

For Google alone, we're studying
three very powerful ways in which they're impacting people's opinions. This phenomenon that we discovered
a few years ago, “SEME” – the search engine manipulation effect –
is a list effect. A list effect is an effect that a list
has on some aspect of our cognitive functioning. What's higher in a list is easier
to remember, for example. SEME is an example of a list effect
but with a difference, because it's the only list effect I'm aware of
that is supported by a daily regimen of operant conditioning that tells us that
what's at the top of search results is better and truer than
what's lower in the list.

It's gotten to a point that fewer than
5% of people click beyond page one, because the stuff that’s on page
one and typically at the top in that sort of above-the-fold
position is the best. Most of the searches that we conduct
are just simple routine searches. What is the capital of Kansas? And on those simple
searches over and over again, the correct answer
arises right at the top. So, we are learning over and over and
over again, and then the day comes when we look something up that
we're really unsure about, like, where should I go on vacation?
Which car should I buy? Which candidate should I vote for? And we do our little search
and there are our results, and what do we know? What’s at the top is
better. What's at the top is truer. That is why it's so easy to use search
results to shift people's opinions.
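
That pattern can be simulated directly. In the sketch below the click-through rates are illustrative assumptions, not measured values; they only encode the regularity just described, that higher positions attract far more clicks:

```python
import random

# Illustrative position bias: assumed click rates for positions 1-5.
CLICK_RATE_BY_POSITION = [0.33, 0.17, 0.10, 0.07, 0.05]

def simulate_clicks(results, users=10_000):
    clicks = {r: 0 for r in results}
    for _ in range(users):
        for result, rate in zip(results, CLICK_RATE_BY_POSITION):
            if random.random() < rate:
                clicks[result] += 1
                break                 # most users stop after one click
    return clicks

ranking = ["pro-A article", "pro-A blog", "neutral piece",
           "pro-B article", "pro-B blog"]
print(simulate_clicks(ranking))
# Whoever sets the order decides which side gets most of the attention.
```
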

This kind of phenomenon is really
scary compared to something like fake news because it's invisible.
There's a manipulation occurring, and people can't see it. It's not competitive
because, well, for one thing, Google for all intents and
purposes has no competitors. Google controls 90% of
search in most of the world. Google is only going to show
you one list, in one order, which has a dramatic
impact on the decisions people make, on people's
opinions, beliefs and so on. The second thing they do is
what you might call the autofill-in. If you are searching for a political
candidate, Congressman John Smith, you type in Congressman John Smith
and they will give you several options that serve as a prompt, as it were, as
to what you might be looking for.

If you have four positive search
suggestions for a candidate, well, guess what happens? SEME happens. People whose opinions can be shifted shift
because people are likely to click on one of those suggestions
that's gonna bring them search results, obviously,
that favor that candidate. That is going to connect them to web
pages that favor that candidate, but now we know that if you allow just
one negative to appear in that list, it wipes out the shift completely, because
of a phenomenon called negativity bias. One negative item can, in some demographic
groups, draw 10 to 15 times as many clicks as a neutral item in the same list.

Turns out that Google is manipulating your
opinions from the very first character that you type into
the search bar. So, this is an incredibly powerful and
simple means of manipulation. If you are a search engine company, and you want to support one candidate or
one cause or one product or one company, just suppress negatives in
your search suggestions. All kinds of people over time will shift
their opinions in the direction where you want them to shift, while you
don't suppress negatives for the other candidate, the other cause,
the other product. And then third, very often
Google gives you a box, so they will give you search results,
but they're below the box and up above is a box and the
box just gives you the answer.
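
A toy sketch of the asymmetric suppression just described – the candidate names, queries, and sentiment wordlist are all invented for illustration:

```python
# Invented example: drop negative autocomplete suggestions, but only
# for the favored candidate. The wordlist and names are illustrative.
NEGATIVE_WORDS = {"scandal", "fraud", "lies", "corrupt"}

def suggestions(prefix, all_queries, favored=None):
    matches = [q for q in all_queries if q.startswith(prefix)]
    if favored:
        matches = [q for q in matches
                   if favored not in q or not (set(q.split()) & NEGATIVE_WORDS)]
    return matches[:4]

queries = ["john smith scandal", "john smith for congress",
           "john smith town hall", "jane doe fraud", "jane doe rally"]
print(suggestions("john smith", queries, favored="john smith"))
# ['john smith for congress', 'john smith town hall']
print(suggestions("jane doe", queries, favored="john smith"))
# ['jane doe fraud', 'jane doe rally'] -- the rival's negatives remain
```
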

When it comes to local search,
which it turns out is the most common thing people do on Google,
40% of all searches, it's not so clear. If I'm doing a search for a pediatrician
in Scranton, Pennsylvania, what happens is Google still bisects the
page, shoves the organic, meritocracy-based information far down the page
and plops its answer box up at the top, but it's populating that box with its own,
sort of, restricted set of information that it's, kind of, drawing
from its own kind of proprietary sandbox. These are just Google's reviews that it's
attempted to collect over the years. Users are habituated to assume the stuff
at the top is the best, but instead, it's just what Google wants them to see,
and it looks the same, and what happens is Google’s basically able to put its hand
on the scale and create this direct consumer harm, because that mom searching
for the pediatrician in Scranton is not finding the highest-rated pediatrician
according to Google's own algorithm.

She's just getting served the Google
thing, no matter what, and that, I think, is leading to, you know,
terrible outcomes in the offline world. So, Google has at its disposal,
on the search engine itself, at least three different ways of impacting
your opinion, and it is using them. We're talking about a single company
having the power to shift the opinions of literally billions of people,
without anyone having the slightest idea that they're doing so. Let me ask you, guys,
about… Switching gears, I wanna ask you about sort of
privacy and information. We have a phone that we were
talking about, it will be, I guess, technically always on,
we have Google which basically maybe knows what I'm going to
ask for before I ask for it. It finishes my search request; it knows
everything that's going on in my household. Should I be concerned about
how much you know about me? We've had this question for more
than a decade at Google, and our answer is always the same. You have control over the
information that we have about you, you can actually have it deleted,
you can also search anonymously, you get to choose not to give us this
information, but if you give it to us, we can do a better job of making
better services for you, and I think that's the right answer.

These computer systems
naturally collect this data, but we also forget it after a while,
and we've written down why and how. Google wants you very much to have
privacy from everyone except them. The first thing we all should
do is quit using Gmail. Google stores and analyzes all Gmails that we write and all the
incoming emails that we read. The ones coming in from other
email services, Google tracks it all. They not only track the Gmails that you
write, they track the drafts of those crazy emails that you decided not to send. And the information from Gmail is then put
into one's personal profile and becomes another source of information they have
for sending people targeted ads. Tech giants are increasingly under
scrutiny from politicians, regulators and experts on the left and the right. Some are concerned
about their growing power, even calling them monopolies,
and the tension keeps building.

Yet their role in contemporary
life certainly isn't shrinking. We too at the NewsHour have worked
and collaborated with Facebook, Google and many other
new media businesses. Journalists whom I communicate
with regularly about these issues, including journalists at Time magazine,
The Guardian, The Hill – I could go on and on – they're using Gmail. Not only that, their companies
are using a form of Gmail, their emails are running
through Google's servers. All of their incoming and
outgoing communications are being monitored by Google. This is happening also
in major universities. If you entered the University of
California after a certain date, guess what? You're using Gmail whether
you know it or not.

And it was found that Google was,
yes, in fact, scanning student emails for non-educational purposes, and what
those non-educational purposes were, we still don't know to this day, because
Google still has not come clean about what they were doing with the information. They still haven't certified that
they have deleted that data. The big challenge was
when I found out that Google was scanning our student emails for advertising purposes and
other non-educational purposes. I went to my state lawmakers
and I tried to encourage them to introduce legislation
that would ban these practices. When I went to the State House, who
was there lobbying against any stronger privacy protections for our kids?
It was Google and Facebook. Well, I guess it makes sense
that so many parts of our life would be influenced by Google, but guess what?
The Federal Government runs on Google, so if you're counting on the Federal
Government to regulate Google, realize that the Federal
Government is using Google Docs.

They're using Google tools. In some cases, they're even linked to
Gmail, so Google has fused itself with our Federal Government in so many ways. The government's motivation for using
it as well: Google's already done it, and they have this thing that works.
consumers, so the idea behind it was, “Well, hey, we'll just put some government
agencies on Google documents,” or, you know, “We'll use
Google Cloud for some things.” They lack the technical expertise to
have the necessary security on it, and then it becomes a big embarrassment
afterwards to try and keep it out of the media because people would rightfully point out that there's
a huge security risk. Some members of Congress are calling
for an investigation of Google for secretly tracking iPhone
users all over the internet. Even users who thought they had
blocked that kind of surveillance. You see this pattern emerging over and
over again, so look at Google Street View.

Google agreeing to pay $7
million to settle with 38 states over its Street View cars' collection of
data from unsecured Wi-Fi networks. The Street View vehicles were equipped with
antennas and open-source software that gathered network
identification information, as well as data frames and payload data being transmitted over
unsecured wireless networks, as the cars were driving by. It's a great tool for, say, tourists
or homesick transplants, but privacy advocate Kevin Bankston
says Google is being too invasive.

There are a lot of people on the
web who are, I think, freaked out by this. They find it kind of
icky and uncomfortable. And so, absent the fact that it got caught
and they were fined and that they tried to cover up some of the things that they
were doing and the media got wind of it, nothing would have happened to them. The new Google Home Mini has a
massive problem and it isn’t even out yet. It was released to reporters for
free last week to test it out. One journalist discovered
his device recorded everything in earshot and uploaded it. More and more people are beginning
to use these smart devices in their homes. There are two things with that
I think people need to be aware of. The fact is that it's always on and
always listening, so it’s collecting an enormous amount of data on its users. The other thing is the fact that it is
only giving you one answer back. Okay, Google. How tall is a T-Rex? Here’s a summary from the website
humboldts.nationalgeographic.com.

Fossil evidence shows that
Tyrannosaurus was about 40 feet long. I think we should be
aware of what's happening with the bias that’s being introduced, because more
and more people are starting to use these units, and certainly
children are using them. And, kids don't have the ability to
discern that it's a biased result. According to a new report by the
Electronic Frontier Foundation, schools are collecting and storing kids’
names, birth dates, browsing histories, location data, and much more,
without proper privacy protections. They went into schools and they
offered their Google Apps for Education for free to schools.
What they weren’t telling anyone was that they were actually
collecting information on our kids secretly and they were
using this for profiling purposes.

We just don't know how that information
is going to be utilized against them. It could be utilized against them
when they apply to college, it could be utilized against them
when they go in for a job, it could be utilized against them for insurance,
it could be utilized against them whenever they want to buy something online. And, they're taking that information
and they’re weaponizing it against our own kids. And, guess what happened.
Nothing. Absolutely nothing happened when… when the FTC heard about it,
when the Department of Education heard about it, they took no action. So, Google grew out of a PhD
program at Stanford. Right? They have this affinity for academics. They view themselves as
academic, a little bit, in nature. Their campus looks like a college campus.

So, Google provides a ton of
funding for academic institutions, individual academics,
all around the world. So, we found something like 300 papers
that were funded by Google in some way that supported Google’s policy positions. They like to say, “Hey,
we have the country's foremost academic experts behind us.” And, what they are not telling the
public is the fact that they’re actually funding the so-called independent experts,
these so-called independent academics. Eric Schmidt was testifying before Congress about whether or not
Google was a monopoly. He was relying on an academic study to
bolster his point, but he didn’t disclose that the paper was actually
funded by Google. Are you concerned that your company has
been “exerting enormous power to direct internet traffic in ways that hurt
many small, rural businesses”? Extremely good and well-meaning small businesses move
up and down in the rankings, but we are in the rankings business.
And so, for every loser, there's a winner, and so forth.

I am satisfied that the vast majority of
small businesses are extremely well served by our approach, and as
I said earlier to Senator Klobuchar, I do believe that if anything, our system
promotes and enhances small business over larger businesses because
it gives them a hearing and a role that they would not otherwise have because of
the nature of the way the algorithms work. Google has blacklists, and the biggest blacklist they
have is called their quarantine list. Now, I'm guessing very few people
have ever heard of this list, and yet I'm telling you, it not only is
a tool of censorship, it is by far the biggest and most dangerous
list that Google maintains for the purpose of
controlling information.

The quarantine list is a list of websites
that Google doesn't want you to visit. Are there 1,000 items on this list? No. There are millions
of websites on this list. Google has the power to
block access to websites, and there are no relevant regulations. There's no oversight,
there's no advisory group. There’s nothing. No one even
realizes that Google is doing it. There was a particular day
where Google shut down the entire Internet for forty minutes. This was reported by the Guardian.
Google did not deny it. They shut down half of the Internet in
Japan, and again, they acknowledged it. We are talking about a
company with so much power. Well, who gave Google the
power to shut down the Internet? Where did that come from? They should say, “Look. We are
liberal or left-wing organizations. We don't want to give a forum for others.” Yeah. And then, I have no issue, but then they say they are a public
forum, and that’s not honest. Google has this ability,
because they are such a large platform, to act as the
world's most powerful censor, and it’s a power and a tool that
they use on a regular basis.

Consider a simple case
involving an eminent scholar, Doctor Jordan Peterson,
of the University of Toronto. Canadian Professor Jordan Peterson shot to
fame when he opposed a transgender law that infringed badly on the
free-speech rights of Canadians. Now, his speech is being
suppressed in a different way. Google blocked him from
his YouTube account. We don't actually know why,
because Google refused to specify. I made a couple of videos at home
criticizing a new law that was set to be passed in Canada, purporting to add
gender identity and gender expression to the list of protected groups under
Canadian human rights legislation. I was objecting to two
parts of the legislation, and one part was what I regarded
as compelled speech. And, that was the provisions
and the policies surrounding the law
making it mandatory to use pronouns of people's choice;
mandatory under punishment of law. And, the second is that it writes a
social constructionist view of gender into the substructure of Canadian law.
That's wrong. It’s factually incorrect.

So, we now
have a legal system that has a scientifically incorrect
doctrine built into it and people don't understand how dangerous that is,
but I understood how dangerous it was. And, that caused a real media storm, I would say, that in some sense
still hasn't subsided. Google shut off my Gmail account and
blocked access to my YouTube channel, and by that time, my YouTube channel had
about 260 videos on it and about 15 million views, and about
400,000 subscribers. So, it’s a major YouTube channel. But they blocked access
to my mail as well, which had everything, like, all my mail
from the last 20 years, basically, and my calendar data – everything.
They said that I'd violated their policy. They said, first of all, that it was
a machine that flagged me, but then it was reviewed by human beings
and they decided to continue the ban.

They gave me no reason. They said I'd violated their policy,
but it was completely vague and I got no reason at all. Now, I've received all
sorts of conflicting reports, and so I really have no
idea why it happened. I can't be certain that it
was political targeting, but that's really not the
point in some sense. The point is just that it was arbitrarily
shut off, and that's a real problem. You come to rely on these things,
and when the plug is pulled suddenly, then that puts a big hole in your life. Before January 1, 2012, I just never
gave Google a second thought. I just thought it was cool and
used it like everyone else does. On January 1 of 2012, I got a bunch of e-mails
from Google saying that my website had
been hacked, that it contained malware, and that
they were blocking access. I really got curious.
Why was Google notifying me? Why wasn't I being notified by
some government agency or some sort of nonprofit
organization? One thing that I thought was curious was they had
no customer service department.

So, here, they're blocking access to your
websites, including my main website, except there is no one you
can call to help you. That I thought was very odd for a
company that big and that cool. Access was being blocked not
just through Google‘s search engine, not just through Google's browser,
which is Chrome, but access was being blocked through Safari, which is an
Apple product, and through Firefox. And, I was thinking, “How could
Google impact what happens when someone is using
another product, like Firefox?” And, it actually took me quite a
while to figure out how that all worked.

So, that's really the event that took
place early 2012 – New Year’s Day – that started to get me to look
a little more critically, a little more professionally, at Google, figuring out how this company operates. How do they do what they're doing?
How big is their reach? What are their motives for doing
what they do? And, over the years, the more I’ve learned,
the more concerned I’ve become.

Well, Dr. Robert Epstein is a
world-renowned psychologist, and his work gets a lot of
attention from the media. In fact, the Washington Post ran a
story on the very question that Epstein had been researching,
and that question is, “Could Google actually tilt an election?” And, the article was fascinating. But what's even more
troubling and fascinating is what happened the day after
that article appeared. That's when Google decided to
shut off Dr. Robert Epstein. The next day, I could no longer access
Google.com. I mean, you're seeing something here that probably very few
people in the world have ever seen. You are seeing a timeout on Google.com.
So, I started, you know, asking around. I started doing some research on it, and
I actually found how a company like Google could, in fact, cut
you off from their services.

I also found in their Terms of
Service, very clear language, saying they had every right to cut
you off from their services whenever they pleased,
with or without cause. When Google decided to go after
Robert Epstein and Jordan Peterson, in a way they were making a mistake because these are two
individuals that have a lot of contacts in the media, have a
lot of relationships, and can draw attention to the fact that they
are being censored by Google. But consider somebody who's had the
spigot shut off by Google who doesn't have those kind of relationships.
What happens to those people? Those people essentially disappear
because Google has decided they don't want you to hear
what they have to say.

You started just having smaller
content providers doing incredibly, you know, innocuous things – reviews
of movies, expressing an opinion on something – that somebody at
YouTube apparently objected to. It could be a critical
review of Wonder Woman, and all of a sudden, the
video gets demonetized. They’re no longer there to be discussed.
That's the problem with censorship. These companies want to present themselves
as only interested in the bottom line, only interested in serving customers, but when you look at the pattern of
behavior, you look at the censorship and the manipulation and the one-sided
nature of it, you can only come to the conclusion that these companies have a far
deeper agenda than they want to let on. Google is definitely the biggest kingmaker
on this Earth that has ever existed, because it can make kings, and
not just in the United States, but pretty much in every
country in the world.

That’s the thing. These people are playing with forces
that they don't understand and they think they can control them. Kingmakers aren't always benevolent forces by any stretch of the imagination, especially when they're
operating behind the scenes. Google and Facebook have the
power to undermine democracy without us knowing that democracy
has been undermined. If the major players in tech right now –
and that's mainly Google and Facebook – banded together and got behind the same
candidate, they could shift, we figured, 10% of the vote in
the United States with no one knowing that
they had done anything. Epstein came from a background
grounded in psychology. He is a Harvard-trained PhD.

He’s done a series of peer-reviewed
studies funded and supported by the National Science Foundation. And, essentially, what they
set out to do was to say, "If we present people with information
in a biased way through a search engine, can we steer them and
change their opinions?” And, what they found out
consistently, again and again, was that yes, it was easy, actually, to shift and steer people's opinions in the
direction that they wanted them to go.

The way we studied this was
with an experiment. We start with a group of people
who we know are undecided on a particular candidate,
and we guarantee that they were undecided on the election
that we were going to show them, because we decided to show them the 2010
election for Prime Minister of Australia. They had no preconceived notions
about that election or the candidates. We give them two short paragraphs; one about the first candidate, Gillard,
and the other’s name is Abbott. And then, we ask them a bunch
of questions about them. How much do you trust each one?
How much do you like each one? What's your overall impression? We ask on
an 11-point scale, where one candidate is at one end of the scale,
one’s at the other, and then we ask them the
ultimate question, which is, “Well, if you had to vote right now,
which one would you vote for?” So, at that point, they have
to pick either Gillard or Abbott.

So, this is all pre-search.
Now, we let them do some research online. Now, they’re using a search engine that we
created which is modeled after Google and it's called Kadoodle. So, we show
them five pages of search results, six results per page. And then, they can use the search
engine as they would use Google. They’re seeing real search results from
that election connecting to real webpages. So, before the search, the split we‘re
getting is exactly what you would expect to get from people who
are undecided, and when they're done, we ask them all those questions again. This is now the post-search part of the
experiment, and the question is, do we get any shift?
Here's what people don't know: People are being randomly
assigned to one of three groups. In one group, they’re
seeing those search results in an order that favors Tony Abbott.

In another group, they’re
seeing those search results in an order that favors Julia Gillard. And, in a third group,
they’re seeing them mixed up. That’s the control group. So, this is a randomized
experiment, in other words. We got a shift in that
first experiment of 48%. What that means is,
if I have 50 people over here, 48% of them, almost half of them, shifted. We got shifts on all the numbers,
not just on the votes, but on the trusting, the liking,
the overall impression. Everything we asked
shifted in a direction that matched the bias
in the search rankings. This is random assignment, so we’re
arbitrarily putting some people in the pro-Abbott group, some people in the
pro-Gillard group; it’s just arbitrary. It’s random.
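
That randomized design can be sketched as a small simulation. Everything numeric below is an invented assumption for illustration – the ±0.2 "pull" of a biased ranking and the subject count are not the study's numbers; only the structure (random assignment to two biased groups and a control) mirrors what is described here:

```python
import random

# Sketch of the randomized design: undecided subjects are randomly
# assigned to a biased or control ranking. The 0.2 "pull" is an
# invented effect size, not a figure from the study.
GROUPS = ["pro-Abbott", "pro-Gillard", "control"]

def post_search_vote(group):
    lean = 0.5                     # undecided: 50/50 before searching
    if group == "pro-Gillard":
        lean += 0.2                # assumed pull from biased rankings
    elif group == "pro-Abbott":
        lean -= 0.2
    return "Gillard" if random.random() < lean else "Abbott"

tally = {g: {"Gillard": 0, "Abbott": 0} for g in GROUPS}
for _ in range(3000):
    group = random.choice(GROUPS)  # random assignment is the key step
    tally[group][post_search_vote(group)] += 1

for group, votes in tally.items():
    print(group, votes)            # biased groups drift toward the bias
```
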

It’s random. And, wherever we put them,
they shift. So, that's the power here. There was another thing that caught our
attention in this first experiment, and that is, three-quarters of the people
in that experiment seemed to have no awareness whatsoever that they
were seeing biased search rankings, even though these were blatantly biased. When we decided to repeat this,
we thought, why don't we try to mask what we’re doing just a little bit
and see if we still get a shift. We take the fourth item,
so let’s say that’s a pro-Abbott item, and we swap
it with a pro-Gillard item from lower in the list. So, we’re just going
to mix it up a little bit.
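
The masking step itself is easy to express. Here is a sketch using the same illustrative (url, leaning) representation as above; the position index and the search strategy are assumptions based on the narration:

```python
def mask_ranking(ranked, pos=3):
    """Soften an otherwise one-sided ranking: swap the fourth result
    (index 3) with the first result lower in the list that leans toward
    the other candidate, as described in the narration."""
    masked = list(ranked)
    target_lean = masked[pos][1]
    for i in range(pos + 1, len(masked)):
        if masked[i][1] not in (target_lean, "neutral"):
            masked[pos], masked[i] = masked[i], masked[pos]
            break
    return masked
```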

So, in that second experiment, same exact
procedure, 103 new participants. Two things happened.
Number one, we got a shift again. This time, 63%, even bigger. Even more interesting, the percentage
of people who seemed unaware that they were seeing biased
search rankings went up to 85%. We said, “Oh, let's do it again and let's be a little more
aggressive with the masks.” Third experiment, we again get
this enormous shift, but now, 100% of our participants saw
no bias in the search results. So, this told us that not only could
we shift opinions and voting preferences just by manipulating search results, but we could do it in such
a way that no one was aware of it. Now, you've got incredible
power to manipulate people. That was our first set of experiments.
The next thing we did was to replicate the experiments we had done on a
small scale, nationally in the US. So, we had more than 2,000
participants from all 50 states. We used masking, and in
some demographic groups, the effect was smaller, and
in some, it was larger. And, in one demographic group, the shift was 80%. Now, we also had so many people in
this particular study that we could look specifically at the very,
very small number of people who did notice the bias in the search
rankings, and that is when we learned that the people who notice the bias shift even farther in
the direction of the bias.

It seems to be that if you see the bias,
since you believe in search results and you believe in search engines
and you believe they’re objective, what you're seeing
is simply confirming that that candidate must be better, because the search engine
itself prefers that candidate. In other words, this algorithm has
chosen that candidate over the other. The algorithm has to be right
because, of course, it's objective. So, we went to India because, in early
2014, the largest democratic election in history took place there. And,
we recruited more than 2,000 people from throughout India to participate
in the same kind of experiment. Now, we’re using real voters right
smack in the middle of an extremely intense campaign, and we're
showing them biased search results.

What do we get? Well,
overall, we found that we could easily get
a shift of over 20%, and in some demographic groups,
the shift was over 60%. So, this was still an enormous effect. We have actually identified a manipulation that a search engine company could
implement for free, and, as icing on the cake, 99.5% of the people in the study saw
no bias in the search results. Hillary Clinton and Donald Trump
held dueling rallies Wednesday, just shy of four weeks until election day. Clinton rallied supporters in Colorado,
while Trump was in Florida. Both candidates wasted no
time going after the other. We developed a new monitoring system,
unprecedented, as far as I know, and this system allowed
us, in effect, to look over the shoulders of people as
they were using search engines, from about five months before the election, all the
way through election day, and then we actually went
a little bit past that. The American people are
the victims of this system.

We recorded all the webpages that
their search results connected to. We knew where in those search results
people were seeing those links, so we could compute bias. We could
actually compute whether or not those search results were favoring either
Donald Trump or Hillary Clinton. We found systematic bias
in favor of one candidate. Now, it happens to be Hillary Clinton, but
in my mind, it doesn’t matter who it is; if there’s systematic bias for one
candidate, that can shift a lot of votes. Here, we’re looking at bias
according to search position. So, this is position one in the
search results, position two, position three, and so on. Any dot above the line, again,
would indicate a pro-Clinton bias, and I mean this bias
in a statistical sense.
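
As a sketch of what "computing bias by search position" could look like: assume each monitored result has been coded with a lean score in [-1, 1], positive meaning pro-Clinton and negative pro-Trump. The coding scheme and the function below are assumptions for illustration, not the project's actual pipeline:

```python
from collections import defaultdict

def bias_by_position(observations):
    """Mean lean score for each first-page search position.

    `observations` is an iterable of (position, lean) pairs harvested from
    monitored searches, with lean in [-1, 1]. Positive averages would plot
    above the line (pro-Clinton), negative ones below it (pro-Trump).
    """
    buckets = defaultdict(list)
    for position, lean in observations:
        if 1 <= position <= 10:   # first page of results only
            buckets[position].append(lean)
    return {pos: sum(v) / len(v) for pos, v in sorted(buckets.items())}
```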

Any one below would indicate
a bias in favor of Trump. We were seeing bias
in all ten search positions on the first page of search results. That's pretty blatant. I mean, frankly, if
I were running the show, I wouldn't want to use a
manipulation that’s so blatant. But remember, we're running
this tracking system secretly. Given the vast number of people these days who get information from the
Internet, who look for information about political issues online, who look
for information about candidates online, if you took away this bias,
it is possible that Clinton and Trump would've been neck
and neck in the popular vote. I'm strictly apolitical,
and the organization where I’ve conducted the
research is strictly nonpartisan.

I will say, in the 2016 election, I do feel Hillary Clinton
was better qualified to serve as President than
Donald Trump, but the work that I'm doing has
nothing to do with politics. It has nothing to do with candidates. What I am finding out is that a handful
of people, located within a few miles of each other in Silicon Valley, have
tremendous power that they shouldn't have.

Now, they could use that power on
Monday for one political party and on Tuesday for a
different political party. Do any of us want to be in that kind of
world where people like that have so much power and they can wield it
this way or that way? I don't think we do. There’s a reason all our political
ads end with “I’m so-and-so, and I approve this message,” which is that
if you want people to make informed decisions based on having
more political speech, not less, based on not tamping down but allowing as many speakers as possible,
then people need to know when they’re being subjected to political speech and
political decision-making, and so forth.

And so, I think that certainly, if they engage in that,
that needs to be in the
purview of a transparency rule from Congress, if
we were to design one, because, in my judgment, it’s not that people shouldn’t be
subjected to that if a private company wants to do it, but
they should know that it’s happening. When you start to talk about
these big platforms, now, other possibilities arise, because if Google or Facebook want to favor
some candidate or some cause, you cannot possibly correct
for that. It is impossible. Suppose, a few months before the election,
Facebook sends out “Go out and register” reminders to certain people, but not to others. We call this TME, the Targeted Messaging Effect.
What happens if, over and over again, they're sending out these messages, “go register”,
“go register”, “go register”, and they’re doing so selectively?
Couldn’t they shift registrations? (A back-of-envelope sketch of that arithmetic appears below.) Here’s the problem
with Facebook and Google: They present themselves,
not as a government trying to steer people
towards some utopia, but as companies that are
simply providing services; services that we want and
services that are good for us.
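
As flagged above, here is a back-of-envelope illustration of why selective registration nudges could matter at scale. Every number is hypothetical, chosen only to show the arithmetic, not drawn from the film or any study:

```python
# Hypothetical Targeted Messaging Effect (TME) arithmetic.
targeted_users = 10_000_000   # people who receive repeated "go register" nudges
lift = 0.04                   # assumed extra registration rate caused by the nudges

extra_registrations = targeted_users * lift
print(f"{extra_registrations:,.0f} extra registrations")  # prints: 400,000
```

If nudges like these went disproportionately to one side's likely supporters, the resulting registrations could dwarf the margin in a close election, which is exactly the concern being raised.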

You’re powerful enough to
change the political landscape. It isn’t a question of whether
you want to, if you can; it’s a question of: convince me that
you’re not doing it, if you can, because there’s no reason for me to
assume that you’re not subject to the same dark motivations of power
and domination that are characteristic of any system that has the
capacity for power and domination. You should assume that
you have a tyrant in you, instead of assuming
that you can just not be evil. It’s not that funny when
it's a system that big. When you’re building a super-powerful superintelligence, not being evil
actually turns out to be a big deal. If you look at the traditional
notion of fascism as demonstrated by Mussolini in Italy, it was a
policy of corporatism. It was a fusing of powerful
private corporations with the government,
joining them together.

And, what we are experiencing
increasingly in the tech world is a fusing of these large tech firms,
like Google and Facebook, with our federal government. It’s not just about trying to get the sort of
regulatory freedom that they want; it's about involvement in areas
related to military technology, related to artificial intelligence, to
steering the tech ship of the future of the American people, and the problem
is that the conversation is being held by our political leaders with Google and
Facebook, but it's a conversation that never includes us at the table.

The adoption rate of smartphones is so profound that that
is the information tool, and so how people get their information,
what they believe, what they don't, is, I think, a project
for the next decade. I think we are increasingly moving to
an era where people are saying, “We need to take better care
of our digital environments and we need to take better care of our digital selves,”
and what that means is being aware of what we are putting into our
computers, but more importantly, what our computers and our
devices are putting into us. We are sharing information with them,
increasingly sensitive information. The amount of data that they had on
us 15 years ago was relatively small; now it's amazingly complex, and
they never forget any of it. Mr. Zuckerberg, I remember well your first
visit to Capitol Hill back in 2010. You spoke to the Senate Republican
High-Tech Task Force, which I chair. You said back then that
Facebook would always be free.

Is that still your objective? Senator, yes. There will always
be a version of Facebook that is free. It is our mission to try to help
connect everyone around the world and to bring the world closer together. In order to do that, we believe that
we need to offer a service that everyone can afford and we’re
committed to doing that. Well, if so, how do you
sustain a business model in which users don't pay for your service? Senator, we run ads. I see. At the end of the day,
there is no free lunch, especially when it comes to
social media and the Internet. The entire internet economy is
literally based on surveillance, and that's something that we really
need to think long and hard about: “Is that a sustainable business
model to protect our privacy and
protect our families and protect our kids, moving forward?” What we need to think about is, how
can we continue with a light-touch, deregulated approach, but one where people
have the information they need to know what's being done and whether
they consent to it or not? And so, that's why I think we need to
have a really big focus on transparency, on privacy, on a level playing
field across the board, so that people understand what's happening and they
can make decisions in a way that they're not able
to very clearly right now.

It isn't obvious that regulators are fast
enough to keep up with the tech world. They‘re going to be five years behind the
game, that’s like a hundred years. That’s like wrestling with
Victorian England, you know? It seems to me that it would be better in
many ways if there were multiple competing search engines and
if there were multiple Facebooks, because at least then we‘d have a
diversity of ethical conundrums, instead of this totalitarian
conundrum that we have right now. My concern at this point is that I don’t
think a lot of people in government really fully understand the
extent of the problem. I don’t even think they really
understand what they’re up against. I mean, these are massive,
massive technological changes, and they’re all happening in parallel. We have no idea of what the
consequences of that are going to be. So, my concern fundamentally is that these
machines will reflect us ethically, and that should be frightening because I wouldn’t say that our ethical
house is particularly in order.

So, they're going to magnify what we are. That's making the presumption that the
thing that we’re building will be a good thing, and I don't think that
it will be a good thing because it will reflect us. If they have this kind of power,
then democracy is an illusion. The free and fair election doesn't exist. There have to be numerous
safeguards in place to make sure not only that they don't exercise these powers, but that
they can't exercise these powers. The Internet belongs to all of us; it does
not belong to Google or Facebook. I mean, we have empirical evidence
that Google is engaging in self-serving bias in a way that
directly harms consumers. As a society, as a democracy, any time
you have concentration at the levels they are in the information sector, it
should be concerning because they really are now more powerful than governments. They are a form of regulator, but they are a private regulator
with no democratic accountability. I can't believe in a system in which the
power is separate from the people. We’re talking about some pretty
arrogant people, in my opinion, who think of themselves as
gods of sorts, and who really want to have a complete
hold over humanity.

These are basically big mind control
machines, and mind control machines that are really good at controlling minds. It’s going to be harder and harder
to fight them if we don’t do so, I would say, as soon as possible. The more rope we give them,
the sooner we are all hanged. I don't believe our species can
survive unless we fix this. We cannot have a society in which if
two people wish to communicate, the only way that can happen is
if it's financed by a third person who wishes to manipulate them. The traditional notion of totalitarianism
rested on the premise that a government would try
to achieve total control over your life, and they would do it by using the might and muscle of government to
do so under compulsion. Well, today, we essentially have a
totalitarian force in the world and that is these large tech companies. But guess what, they didn't use storm
troopers, they didn't use the Gulag, they didn't use the arrest of
political prisoners to accomplish it. We all opted in to do it ourselves.

We volunteered for this arrangement. And, we live in a world today where
these tech giants have a level of control and an ability to manipulate
us that Stalin, Mao, Hitler, and Mussolini could
only have dreamed of. The power is immense and
we are essentially trusting these large tech companies to make the
right and good decision for us. I, for one, am not prepared to cede that
level of power to these individuals. In the meantime, if the companies
won’t change, delete your accounts. Okay?
